1. Should have worked with AWS, Docker, and Kubernetes.
2. Should have worked with a scripting language.
3. Should know how to monitor system performance, CPU, and memory (see the Python sketch after this list).
4. Should be able to troubleshoot issues.
5. Should have knowledge of automated deployment.
6. Should be proficient in one programming language; Python preferred.
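The monitoring expectation above can start as a small script. Below is a minimal sketch, assuming the psutil package is available (an assumption, not part of the posting), that samples CPU and memory utilisation a few times:

```python
# Minimal sketch, assuming the psutil package is installed; the sample count
# and intervals are illustrative only.
import time

import psutil


def sample_system(samples: int = 3, delay_seconds: float = 5.0) -> None:
    """Print CPU and memory utilisation a few times at a fixed interval."""
    for _ in range(samples):
        cpu_percent = psutil.cpu_percent(interval=1)   # averaged over 1 second
        memory = psutil.virtual_memory()               # system-wide memory stats
        print(f"cpu={cpu_percent:.1f}% mem_used={memory.percent:.1f}%")
        time.sleep(delay_seconds)


if __name__ == "__main__":
    sample_system()
```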
Numerator is looking for an experienced, talented and quick-thinking DevOps Manager to join our team and work with the Global DevOps groups to keep infrastructure up to date and continuously advancing. This is a unique opportunity where you will get the chance to work on the infrastructure of both established and greenfield products. Our technology harnesses consumer-related data in many ways, including gamified mobile apps, sophisticated web crawling and enhanced deep learning algorithms, to deliver an unmatched view of the consumer shopping experience. As a member of the Numerator DevOps Engineering team, you will make an immediate impact as you help build out and expand our technology platforms from on-premise to the cloud across a wide range of software ecosystems. Many of your daily tasks and engagements with application teams will help shape how new projects are delivered at scale to meet our clients' demands. This role requires a balance between hands-on infrastructure-as-code deployments with application teams and working with the Global DevOps Team to roll out new initiatives.
What you will get to do
Requirements
Nice to have
About Us
At Digilytics™, we build and deliver easy-to-use AI products to the secured lending and consumer industry sectors. In an ever-crowded world of clever technology solutions looking for a problem to solve, our solutions start with a keen understanding of what creates and what destroys value in our clients’ business.
Founded by Arindom Basu (Founding member of Infosys Consulting), the leadership of Digilytics™ is deeply rooted in leveraging disruptive technology to drive profitable business growth. With over 50 years of combined experience in technology-enabled change, the Digilytics™ leadership is focused on building a values-first firm that will stand the test of time.
We are currently focused on developing a product, Revel FS, to revolutionise loan origination for mortgages and secured lending. We are also developing a second product, Revel CI, focused on improving trade (secondary) sales to consumer industry clients like auto and FMCG players.
The leadership strongly believes in the ethos of enabling intelligence across the organization. Digilytics AI is headquartered in London, with a presence across India.
Website: http://www.digilytics.ai
- Know about our product
- Digilytics RevEL: https://www.digilytics.ai/RevEL/Digilytics
- Digilytics RevUP: https://www.digilytics.ai/RevUP/
- What's it like working at Digilytics: https://www.digilytics.ai/about-us.html
- Digilytics featured in Forbes: https://bit.ly/3zDQc4z
Responsibilities
- Experience with Azure services (Virtual Machines, Containers, Databases, Security/Firewall, Function Apps, etc.)
- Hands-on experience with Kubernetes, Docker, and Helm.
- Deployment of Java builds and administration/configuration of Nginx/reverse proxy, load balancers, MS SQL, GitHub, and disaster recovery.
- Linux: must have basic knowledge of user creation/deletion, ACLs, LVM, etc.
- CI/CD: Azure DevOps or any other automation tool such as Terraform, Jenkins, etc.
- Experience with SharePoint and O365 administration.
- Azure/Kubernetes certification is preferred.
- Microsoft Partnership experience is good to have.
- Excellent understanding of required technologies
- Good interpersonal skills and the ability to communicate ideas clearly at all levels
- Ability to work in unfamiliar business areas and to use your skills to create solutions
- Ability to both work in and lead a team and to deliver and accept peer review
- Flexible approach to working environment and hours to meet the needs of the business and clients
Must Haves:
- Hands-on experience with Kubernetes, Docker, and Helm.
- Experience with Azure/AWS or any other cloud provider.
- Knowledge of Linux and CI/CD tools.
Experience & Education:
- A start-up mindset with proven experience working in both smaller and larger organizations, with multicultural exposure
- Between 4-9 years of experience working closely with the relevant technologies, and developing world-class software and solutions
- Domain and industry experience by serving customers in one or more of these industries - Financial Services, Professional Services, other Retail Consumer Services
- A bachelor's degree, or equivalent, in Software Engineering and Computer Science
Requirements:-
- Bachelor’s Degree or Master’s in Computer Science, Engineering, Software Engineering, or a relevant field.
- Strong experience with Windows/Linux-based infrastructures, Linux/Unix administration.
- Core skills: Jira, Bitbucket, Jenkins, Xray, Ansible, Windows, and .NET.
- Strong experience with databases such as MS SQL, MySQL, and NoSQL stores.
- Knowledge of scripting languages such as shell scripting (Bash), Python, PHP, and Groovy.
- Experience with Agile project management and workflow tools such as Jira, Workfront, etc.
- Experience with open-source technologies and cloud services.
- Experience in working with Puppet or Chef for automation and configuration.
- Strong communication skills and the ability to explain protocols and processes to the team and management.
- Experience in a DevOps Engineer role (or similar role)
- Experience in software development and infrastructure development is a plus
Job Specifications:-
- Building and maintaining tools, solutions and microservices associated with deployment and our operations platform, ensuring that all meet our customer service standards and reduce errors.
- Actively troubleshoot any issues that arise during testing and production, catching and solving issues before launch.
- Test our system integrity, implemented designs, application developments and other processes related to infrastructure, making improvements as needed
- Deploy product updates as required while implementing integrations when they arise.
- Automate our operational processes as needed, with accuracy and in compliance with our security requirements (a minimal health-check sketch follows this list).
- Specify, document and develop new product features, and write automation scripts. Manage code deployments, fixes, updates and related processes.
- Work with open-source technologies as needed.
- Work with CI and CD tools, and source control such as Git and SVN.
- Lead the team through development and operations.
- Offer technical support where needed, developing software for our back-end systems.
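As a hedged illustration of the operational automation above, the sketch below polls a service's health endpoint after a deployment and fails loudly if the service does not become healthy in time; the URL, retry counts and timeouts are hypothetical placeholders, not part of the actual platform:

```python
# Minimal sketch: post-deployment health check. The endpoint and retry settings
# are hypothetical; adapt them to the real service being deployed.
import sys
import time

import requests


def wait_until_healthy(url: str, attempts: int = 10, delay_seconds: float = 5.0) -> bool:
    """Return True once the endpoint answers 200 OK, False if it never does."""
    for attempt in range(1, attempts + 1):
        try:
            response = requests.get(url, timeout=3)
            if response.status_code == 200:
                print(f"healthy after {attempt} attempt(s)")
                return True
        except requests.RequestException as exc:
            print(f"attempt {attempt} failed: {exc}")
        time.sleep(delay_seconds)
    return False


if __name__ == "__main__":
    # Hypothetical internal health endpoint.
    if not wait_until_healthy("https://example.internal/healthz"):
        sys.exit(1)
```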
About the client :
Asia’s largest global sports media property in history with a global broadcast to 150+ countries. As the world’s largest martial arts organization, they are a celebration of Asia’s greatest cultural treasure, and its deep-rooted Asian values of integrity, humility, honor, respect, courage, discipline, and compassion. Has achieved some of the highest TV ratings and social media engagement metrics across Asia with its unique brand of Asian values, world-class athletes, and world-class production. Broadcast partners include Turner Sports, Star India, TV Tokyo, Fox Sports, ABS-CBN, Astro, ClaroSports, Bandsports, Startimes, Premier Sports, Thairath TV, Skynet, Mediacorp, OSN, and more. Institutional investors include Sequoia Capital, Temasek Holdings, GIC, Iconiq Capital, Greenoaks Capital, and Mission Holdings. Currently has offices in Singapore, Tokyo, Los Angeles, Shanghai, Milan, Beijing, Bangkok, Manila, Jakarta, and Bangalore.
Position: DevOps Engineer – SDE3
As part of the engineering team, you would be expected to have deep technology expertise with a passion for building highly scalable products. This is a unique opportunity where you can impact the lives of people across 150+ countries!
Responsibilities
• Collaborate in large-scale systems design discussions.
• Deploy and maintain in-house/customer systems, ensuring high availability, performance and optimal cost.
• Automate build pipelines and ensure the right architecture for CI/CD.
• Work with engineering leaders to ensure cloud security.
• Develop standard operating procedures for various facets of infrastructure services (CI/CD, Git branching, SAST, quality gates, auto scaling).
• Perform and automate regular backups of servers and databases; ensure rollback and restore capabilities are real-time and zero-downtime (a minimal backup sketch follows this list).
• Lead the entire DevOps charter for ONE Championship. Mentor other DevOps engineers. Ensure industry standards are followed.
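A minimal sketch of the kind of backup automation described above is shown below; it assumes PostgreSQL and AWS S3 via the boto3 package, and the database and bucket names are hypothetical placeholders rather than the actual setup:

```python
# Minimal sketch, assuming PostgreSQL (pg_dump on PATH, credentials in the
# environment) and AWS S3 via boto3; all names below are placeholders.
import datetime
import subprocess

import boto3


def backup_database(db_name: str, bucket: str) -> str:
    """Dump a database to a local file and upload it to object storage."""
    timestamp = datetime.datetime.utcnow().strftime("%Y%m%dT%H%M%SZ")
    dump_file = f"/tmp/{db_name}-{timestamp}.sql"

    # Create a plain-SQL dump of the database.
    subprocess.run(["pg_dump", "--file", dump_file, db_name], check=True)

    # Keep a copy in S3 so the dump can be restored even if the host is lost.
    boto3.client("s3").upload_file(dump_file, bucket, f"backups/{db_name}/{timestamp}.sql")
    return dump_file


if __name__ == "__main__":
    backup_database("app_db", "example-backup-bucket")
```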
Requirements
• Overall 5+ years of experience as a DevOps Engineer/Site Reliability Engineer
• B.E/B.Tech in CS or equivalent streams from an institute of repute
• Experience in Azure is a must. AWS experience is a plus
• Experience in Kubernetes, Docker, and containers
• Proficiency in developing and deploying fully automated environments using Puppet/Ansible and Terraform
• Experience with monitoring tools like Nagios/Icinga, Prometheus, AlertManager, Newrelic
• Good knowledge of source code control (git)
• Expertise in Continuous Integration and Continuous Deployment setup using Azure Pipeline or Jenkins
• Strong experience in programming languages. Python is preferred
• Experience in scripting and unit testing
• Basic knowledge of SQL & NoSQL databases
• Strong Linux fundamentals
• Experience in SonarQube, Locust & Browserstack is a plus
Job Description: (8-12 years)
○ Develop best practices for the team and take responsibility for architecture, solutions and documentation operations in order to meet the engineering department’s quality standards
○ Participate in production outage handling, work on complex issues and drive them to resolution
○ Develop custom tools and integrations with existing tools to increase engineering productivity
Required Experience and Expertise
○ Deep understanding of Kernel, Networking and OS fundamentals
○ Strong experience in writing Helm charts.
○ Deep understanding of Kubernetes (K8s); a short read-only client sketch follows this list.
○ Good knowledge of service meshes.
○ Good database understanding.
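As a small, hedged illustration of working with Kubernetes programmatically (it does not replace Helm or cluster-level expertise), the sketch below uses the official Kubernetes Python client to list pods that are not in the Running phase; it assumes a local kubeconfig and that the `kubernetes` package is installed:

```python
# Minimal read-only sketch using the official Kubernetes Python client; it
# assumes a local kubeconfig and does not modify any cluster state.
from kubernetes import client, config


def list_unready_pods(namespace: str = "default") -> None:
    """Print pods in the namespace whose phase is not Running."""
    config.load_kube_config()            # picks up ~/.kube/config
    core = client.CoreV1Api()
    for pod in core.list_namespaced_pod(namespace).items:
        if pod.status.phase != "Running":
            print(f"{pod.metadata.name}: {pod.status.phase}")


if __name__ == "__main__":
    list_unready_pods()
```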
Notice Period: 30 days max
Are you the one? Quick self-discovery test:
- Love for the cloud: When was the last time your dinner entailed an act on “How would ‘Jerry Seinfeld’ pitch Cloud platform & products to this prospect” and your friend did the ‘Sheldon’ version of the same thing.
- Passion: When was the last time you went to a remote gas station while on vacation and ended up helping the gas station owner saasify his 7 gas stations across other geographies.
- Compassion for customers: You listen more than you speak. When you do speak, people feel the need to listen.
- Humor for life: When was the last time you told a concerned CEO, ‘If Elon Musk can attempt to take humanity to Mars, why can’t we take your business to run on the cloud?’
Your bucket of undertakings:
This position is responsible for consulting with clients and proposing architectural solutions that help move and improve infrastructure from on-premise to the cloud, or help optimize cloud spend when moving from one public cloud to another.
- Be the first one to experiment with new-age cloud offerings, help define best practices as a thought leader for cloud, automation & DevOps, and be a solution visionary and technology expert across multiple channels.
- Continually augment skills and learn new tech as the technology and client needs evolve
- Use your experience in the Google cloud platform, AWS, or Microsoft Azure to build hybrid-cloud solutions for customers.
- Provide leadership to project teams, and facilitate the definition of project deliverables around core Cloud-based technology and methods.
- Define tracking mechanisms and ensure IT standards and methodology are met; deliver quality results.
- Participate in technical reviews of requirements, designs, code, and other artifacts
- Identify and keep abreast of new technical concepts in the Google Cloud Platform
- Security, Risk, and Compliance - Advise customers on best practices around access management, network setup, regulatory compliance, and related areas.
Accomplishment Set
- Passionate, persuasive, articulate Cloud professional capable of quickly establishing interest and credibility
- Good business judgment, a comfortable, open communication style, and a willingness and ability to work with customers and teams.
- Strong service attitude and a commitment to quality.
- Highly organised and efficient.
- Confident working with others to inspire a high-quality standard.
Experience :
- 4-8 years of experience in Cloud Infrastructure and Operations domains
- Experience with Linux systems and/or Windows servers
- Specialize in one or two cloud deployment platforms: AWS, GCP
- Hands-on experience with AWS/GCP services (EKS, ECS, EC2, VPC, RDS, Lambda, GKE, Compute Engine, API Gateway, AppSync and service mesh); a boto3 sketch follows this list
- Experience in one or more scripting languages: Python, Bash
- Good understanding of Apache Web Server, Nginx, MySQL, MongoDB, Nagios
- Logging and Monitoring tools (ELK, Stackdriver, CloudWatch)
- DevOps Technologies (AWS DevOps, Jenkins, Git, Maven)
- Knowledge of configuration management tools such as Ansible, Terraform, Puppet, Chef, Packer
- Experience working with deployment and orchestration technologies (such as Docker, Kubernetes, Mesos)
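As a hedged illustration of the hands-on cloud work above, the sketch below uses boto3 (AWS credentials assumed to be configured locally) to list running EC2 instances; the region and filter are illustrative only:

```python
# Minimal sketch with boto3; credentials are assumed to be configured via the
# usual AWS mechanisms, and the region is a placeholder.
import boto3


def list_running_instances(region: str = "ap-south-1") -> None:
    """Print the ID and type of every running EC2 instance in the region."""
    ec2 = boto3.client("ec2", region_name=region)
    response = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )
    for reservation in response["Reservations"]:
        for instance in reservation["Instances"]:
            print(instance["InstanceId"], instance["InstanceType"])


if __name__ == "__main__":
    list_running_instances()
```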
Education :
- Is education overrated? Yes, we believe so. However, there is no other way to locate you. So, unfortunately, we might have to look for a Bachelor's or Master's degree in engineering from a reputed institute, or you should have been programming since you were 12. The latter is better, and we will find you faster if you mention it in some manner. It is not just degrees; we are not too thrilled by tech certifications either ... :)
- To reiterate: a passion for tech, an insatiable desire to learn the latest new-age cloud tech, a highly analytical aptitude and a strong ‘desire to deliver’ outlive those fancy degrees!
- 3-8 years of experience with hands-on experience in Cloud Computing (AWS/GCP) and IT operational experience in a global enterprise environment.
- Good analytical, communication, problem solving, and learning skills.
- Knowledge of programming against cloud platforms such as Google Cloud Platform, and of lean development methodologies.
At Karza Technologies, we take pride in building one of the most comprehensive digital onboarding & due-diligence platforms by profiling millions of entities and trillions of associations amongst them, using data collated from more than 700 publicly available government sources. Primarily in the B2B Fintech Enterprise space, we are headquartered in Mumbai, Lower Parel, with a 100+ strong workforce. We are truly furthering the cause of Digital India by providing the entire BFSI ecosystem with tech products and services that aid in onboarding customers, automating processes and mitigating risks seamlessly, in real-time and at a fraction of the current cost.
A few recognitions:
- Recognized as one of the Top 25 startups in India to work with in 2019 by LinkedIn
- Winner of HDFC Bank's Digital Innovation Summit 2020
- Super Winners (Won every category) at Tecnoviti 2020 by Banking Frontiers
- Winner of Amazon AI Award 2019 for Fintech
- Winner of FinTech Spot Pitches at Fintegrate Zone 2018 held at BSE
- Winner of FinShare 2018 challenge held by ShareKhan
- Only startup in Yes Bank Global Fintech Accelerator to win the account during the Cohort
- 2nd place Citi India FinTech Challenge 2018 by Citibank
- Top 3 in Viacom18's Startup Engagement Programme VStEP
What your average day would look like:
- Deploy and maintain mission-critical information extraction, analysis, and management systems
- Manage low cost, scalable streaming data pipelines
- Provide direct and responsive support for urgent production issues
- Contribute ideas towards secure and reliable Cloud architecture
- Use open source technologies and tools to accomplish specific use cases encountered within the project
- Use coding languages or scripting methodologies to solve automation problems
- Collaborate with others on the project to brainstorm about the best way to tackle a complex infrastructure, security, or deployment problem
- Identify processes and practices to streamline development & deployment to minimize downtime and maximize turnaround time
What you need to work with us:
- Proficiency in at least one of the general-purpose programming languages like Python, Java, etc.
- Experience in managing the IAAS and PAAS components on popular public Cloud Service Providers like AWS, Azure, GCP etc.
- Proficiency in Unix Operating systems and comfortable with Networking concepts
- Experience with developing/deploying a scalable system
- Experience with distributed databases & message queues (like Cassandra, ElasticSearch, MongoDB, Kafka, etc.); a minimal Kafka consumer sketch follows this list
- Experience in managing Hadoop clusters
- Understanding of containers and experience managing them in production using container orchestration services.
- Solid understanding of data structures and algorithms.
- Applied exposure to continuous delivery pipelines (CI/CD).
- Keen interest and proven track record in automation and cost optimization.
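As a small, hedged illustration of consuming from a message queue such as Kafka (mentioned above), the sketch below uses the kafka-python package; the topic name and broker address are hypothetical placeholders:

```python
# Minimal sketch with the kafka-python package; the topic and broker address
# are placeholders, not a real deployment.
from kafka import KafkaConsumer


def consume(topic: str = "events", brokers: str = "localhost:9092") -> None:
    """Print basic metadata and a preview of each message on the topic."""
    consumer = KafkaConsumer(
        topic,
        bootstrap_servers=brokers,
        auto_offset_reset="earliest",
        enable_auto_commit=True,
    )
    for message in consumer:
        # Each message carries raw bytes; a real pipeline would deserialize and route it.
        print(message.topic, message.partition, message.offset, message.value[:80])


if __name__ == "__main__":
    consume()
```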
Experience:
- 1-4 years of relevant experience
- BE in Computer Science / Information Technology
- Hands-on experience in the following is a must: Unix, Python and shell scripting.
- Hands-on experience in creating infrastructure on the AWS cloud platform is a must.
- Must have experience in industry standard CI/CD tools like Git/BitBucket, Jenkins, Maven, Artifactory and Chef.
- Must be good at these DevOps tools:
Version Control Tools: Git, CVS
Build Tools: Maven and Gradle
CI Tools: Jenkins
- Hands-on experience with Analytics tools, ELK stack.
- Knowledge of Java will be an advantage.
- Experience designing and implementing an effective and efficient CI/CD flow that gets code from dev to prod with high quality and minimal manual effort.
- Ability to help debug and optimise code and automate routine tasks.
- Should have excellent communication skills
- Experience in dealing with difficult situations and making decisions with a sense of urgency.
- Experience in Agile and Jira will be an added advantage