Job Description:
- The engineer should be highly motivated, able to work independently, and able to guide other engineers within and outside the team.
- The engineer should possess varied software skills in shell scripting, Linux, Oracle Database, WebLogic, Git, Ant, Hudson, Jenkins, Docker, and Maven.
- Work is super fun, non-routine, and challenging, involving the application of advanced skills in the area of specialization.
Key responsibilities:
- Design, develop, troubleshoot, and debug software for Hudson monitoring, cloud software installation, and infrastructure and cloud application usage monitoring.
Required Knowledge:
- Source configuration management, Docker, Puppet, Ansible, application and server monitoring, AWS, database administration, Kubernetes, log monitoring, and CI/CD; designing and implementing build, deployment, and configuration management; improving infrastructure development; scripting languages.
- Good written and verbal communication skills
Qualification:
Education and Experience: Bachelor's/Master's in Computer Science
Open to 24x7 shifts

About Jungleworks
Technical Skills:
- Hands-on experience with AWS, Google Cloud Platform (GCP), and Microsoft Azure cloud computing
- Proficiency in Windows Server and Linux server environments
- Proficiency with Internet Information Services (IIS), Nginx, Apache, etc.
- Experience in deploying .NET applications (ASP.NET, MVC, Web API, WCF, etc.), .NET Core, Python, and Node.js applications
- Familiarity with GitLab or GitHub for version control and Jenkins for CI/CD processes
Job Responsibilities:
- Managing and maintaining the efficient functioning of containerized applications and systems within an organization
- Design, implement, and manage scalable Kubernetes clusters in cloud or on-premise environments
- Develop and maintain CI/CD pipelines to automate infrastructure and application deployments, and track all automation processes
- Implement workload automation using configuration management tools, as well as infrastructure as code (IaC) approaches for resource provisioning
- Monitor, troubleshoot, and optimize the performance of Kubernetes clusters and underlying cloud infrastructure
- Ensure high availability, security, and scalability of infrastructure through automation and best practices
- Establish and enforce cloud security standards, policies, and procedures
- Work with agile methodologies
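The cluster-monitoring responsibilities above can be made concrete with a small check script. This is an illustrative sketch only (not part of the posting), assuming pod status has been fetched as JSON in the shape produced by `kubectl get pods -A -o json`:

```python
import json

def unhealthy_pods(pod_list: dict) -> list:
    """Return (namespace, name, phase) for pods not in a healthy phase.

    `pod_list` is a dict shaped like the output of `kubectl get pods -A -o json`.
    """
    bad = []
    for item in pod_list.get("items", []):
        meta = item.get("metadata", {})
        phase = item.get("status", {}).get("phase", "Unknown")
        if phase not in ("Running", "Succeeded"):
            bad.append((meta.get("namespace"), meta.get("name"), phase))
    return bad

# Stubbed pod list for illustration; in practice you might feed it
# json.loads(subprocess.check_output(["kubectl", "get", "pods", "-A", "-o", "json"]))
sample = {
    "items": [
        {"metadata": {"namespace": "default", "name": "api-1"},
         "status": {"phase": "Running"}},
        {"metadata": {"namespace": "default", "name": "worker-1"},
         "status": {"phase": "Pending"}},
    ]
}
print(unhealthy_pods(sample))  # → [('default', 'worker-1', 'Pending')]
```

A check like this would typically run on a schedule and feed an alerting channel rather than print to stdout.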
Primary Requirements:
- Kubernetes: Proven experience in managing Kubernetes clusters (min. 2-3 years)
- Linux/Unix: Proficiency in administering complex Linux infrastructures and services
- Infrastructure as Code: Hands-on experience with CM tools like Ansible, as well as knowledge of resource provisioning with Terraform or other cloud-based utilities
- CI/CD Pipelines: Expertise in building and monitoring complex CI/CD pipelines to manage the build, test, packaging, containerization, and release processes of software
- Scripting & Automation: Strong scripting and process automation skills in Bash, Python
- Monitoring Tools: Experience with monitoring and logging tools (Prometheus, Grafana)
- Version Control: Proficient with Git and familiar with GitOps workflows.
- Security: Strong understanding of security best practices in cloud and containerized environments.
Skills/Traits that would be an advantage:
- Kubernetes administration experience, including installation, configuration, and troubleshooting
- Kubernetes development experience
- Strong analytical and problem-solving skills
- Excellent communication and interpersonal skills
- Ability to work independently and as part of a team

About the job
Our goal
We are reinventing the future of MLOps. The Censius Observability platform enables businesses to gain greater visibility into how their AI makes decisions and to understand it better. We enable explanations of predictions, continuous monitoring of drift, and assessment of fairness in the real world. (TL;DR: build the best ML monitoring tool)
The culture
We believe in constantly iterating and improving our team culture, just like our product. We have found a good balance between async and sync work: the default is still Notion docs over meetings, but at the same time we recognize that, as an early-stage startup, brainstorming together over calls leads to results faster. If you enjoy taking ownership, moving quickly, and writing docs, you will fit right in.
The role:
Our engineering team is growing and we are looking to bring on board a senior software engineer who can help us transition to the next phase of the company. As we roll out our platform to customers, you will be pivotal in refining our system architecture, ensuring the various tech stacks play well with each other, and smoothening the DevOps process.
On the platform, we use Python (ML-related jobs), Golang (core infrastructure), and NodeJS (user-facing). The platform is 100% cloud-native and we use Envoy as a proxy (eventually will lead to service-mesh architecture).
By joining our team, you will get exposure to working across a swath of modern technologies while building an enterprise-grade ML platform in one of the most promising areas.
Responsibilities
- Be the bridge between engineering and product teams. Understand long-term product roadmap and architect a system design that will scale with our plans.
- Take ownership of converting product insights into detailed engineering requirements. Break these down into smaller tasks and work with the team to plan and execute sprints.
- Author high-quality, high-performance, unit-tested code running in a distributed environment using containers.
- Continually evaluate and improve DevOps processes for a cloud-native codebase.
- Review PRs, mentor others and proactively take initiatives to improve our team's shipping velocity.
- Leverage your industry experience to champion engineering best practices within the organization.
Qualifications
Work Experience
- 3+ years of industry experience (2+ years in a senior engineering role), preferably with some experience leading remote development teams.
- Proven track record of building large-scale, high-throughput, low-latency production systems, with at least 3 years working with customers, architecting solutions, and delivering end-to-end products.
- Fluency in writing production-grade Go or Python in a microservice architecture with containers/VMs for 3+ years.
- 3+ years of DevOps experience (Kubernetes, Docker, Helm and public cloud APIs)
- Worked with relational (SQL) as well as non-relational databases (Mongo or Couch) in a production environment.
- (Bonus: worked with big data in data lakes/warehouses).
- (Bonus: built an end-to-end ML pipeline)
Skills
- Strong documentation skills. As a remote team, we heavily rely on elaborate documentation for everything we are working on.
- Ability to motivate, mentor, and lead others (we have a flat team structure, but the team would rely upon you to make important decisions)
- Strong independent contributor as well as a team player.
- Working knowledge of ML and familiarity with concepts of MLOps
Benefits
- Competitive Salary
- Work Remotely
- Health insurance
- Unlimited Time Off
- Support for continual learning (free books and online courses)
- Reimbursement for streaming services (think Netflix)
- Reimbursement for gym or physical activity of your choice
- Flex hours
- Leveling Up Opportunities
You will excel in this role if
- You have a product mindset. You understand, care about, and can relate to our customers.
- You take ownership, collaborate, and follow through to the very end.
- You love solving difficult problems, stand your ground, and get what you want from engineers.
- Resonate with our core values of innovation, curiosity, accountability, trust, fun, and social good.
Our client is a call management solutions company, which helps small to mid-sized businesses use its virtual call center to manage customer calls and queries. It is an AI- and cloud-based call operating facility that is affordable as well as feature-optimized. The advanced features offered, like call recording, IVR, toll-free numbers, and call tracking, are based on automation and enhance the call-handling quality and process for each client, as per their requirements. They service over 6,000 business clients, including large accounts like Flipkart and Uber.
- Being involved in Configuration Management, Web Services Architectures, DevOps Implementation, Build & Release Management, Database management, Backups, and Monitoring.
- Ensuring reliable operation of CI/CD pipelines
- Orchestrate the provisioning, load balancing, configuration, monitoring and billing of resources in the cloud environment in a highly automated manner
- Logging, metrics and alerting management.
- Creating Dockerfiles
- Creating Bash/Python scripts for automation.
- Performing root cause analysis for production errors.
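As an illustration of the automation-scripting and root-cause-analysis bullets above, a minimal Python sketch that tallies error-level log lines to narrow down where a production failure started (the log format and level names here are assumptions for the example, not from the posting):

```python
import re
from collections import Counter

# Assumed severity markers; real applications may use different level names.
LEVEL_RE = re.compile(r"\b(ERROR|WARN|CRITICAL)\b")

def error_summary(lines):
    """Tally ERROR/WARN/CRITICAL occurrences across a log window."""
    counts = Counter()
    for line in lines:
        m = LEVEL_RE.search(line)
        if m:
            counts[m.group(1)] += 1
    return dict(counts)

# Hypothetical log excerpt for illustration:
log = [
    "2024-01-01 12:00:01 INFO  request served",
    "2024-01-01 12:00:02 ERROR db connection refused",
    "2024-01-01 12:00:03 ERROR db connection refused",
    "2024-01-01 12:00:04 WARN  retry scheduled",
]
print(error_summary(log))  # → {'ERROR': 2, 'WARN': 1}
```

In practice a script like this would read from a log file or an aggregator (ELK, Graylog) rather than an in-memory list, and group by message text as well as level.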
What you need to have:
- Proficient in the Linux command line and troubleshooting.
- Proficient in AWS services: deploying, monitoring, and troubleshooting applications in AWS.
- Hands-on experience with CI tooling preferably with Jenkins.
- Proficient in deployment using Ansible.
- Knowledge of infrastructure management tools (Infrastructure as Code) such as Terraform, AWS CloudFormation, etc.
- Proficient in deploying applications behind load balancers and proxy servers such as Nginx and Apache.
- Scripting languages: Bash, Python, Groovy.
- Experience with logging, monitoring, and alerting tools like ELK (Elasticsearch, Logstash, Kibana) and Nagios. Graylog, Splunk, Prometheus, and Grafana are a plus.
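To make the monitoring-tooling requirement concrete, a simplified Python sketch that parses Prometheus-style exposition text into metric/value pairs (real scrapes can also carry timestamps and escaped label values, which this deliberately ignores):

```python
def parse_metrics(text):
    """Parse simple Prometheus exposition-format lines into {metric: value}.

    Keeps the label set as part of the key; skips comments and blank lines.
    Simplified: ignores optional trailing timestamps and label escaping.
    """
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # The value is the last space-separated token on the line.
        name, _, value = line.rpartition(" ")
        metrics[name] = float(value)
    return metrics

# Hypothetical scrape output for illustration:
sample = """\
# HELP process_cpu_seconds_total Total CPU time.
process_cpu_seconds_total 12.5
http_requests_total{code="200"} 1027
"""
print(parse_metrics(sample))
```

A real pipeline would of course use the Prometheus server or a client library instead of hand-parsing, but the format itself is simple enough that ad-hoc scripts like this are common in debugging.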
Job Description
Please connect on LinkedIn or share your resume with Shrashti Jain.
• 8+ years of overall experience, with at least 4+ years of relevant experience (DevOps experience should account for the larger share of the overall experience)
• Experience with Kubernetes and other container management solutions
• Should have hands-on experience with, and a good understanding of, DevOps tools and automation frameworks
• Demonstrated hands-on experience with DevOps techniques building continuous integration solutions using Jenkins, Docker, Git, Maven
• Experience with n-tier web application development and with J2EE/.NET-based frameworks
• Look for ways to improve security, reliability, diagnostics, and costs
• Knowledge of security, networking, DNS, firewalls, WAF, etc.
• Familiarity with Helm and Terraform for provisioning GKE; Bash/shell scripting
• Must be proficient in one or more scripting languages: Unix Shell, Perl, Python
• Knowledge and experience with Linux OS
• Should have working experience with monitoring tools like Datadog, ELK, and/or Splunk, or other monitoring tools/processes
• Experience working in Agile environments
• Ability to handle multiple competing priorities in a fast-paced environment
• Strong Automation and Problem-solving skills and ability
• Experience implementing and supporting AWS-based instances and services (e.g. EC2, S3, EBS, ELB, RDS, IAM, Route 53, CloudFront, ElastiCache).
• Very strong hands-on experience with automation tools such as Terraform
• Good experience with provisioning tools such as Ansible and Chef
• Experience with CI/CD tools such as Jenkins.
• Experience managing production environments.
• Good understanding of security in IT and the cloud
• Good knowledge of TCP/IP
• Good Experience with Linux, networking and generic system operations tools
• Experience with Clojure and/or the JVM
• Understanding of security concepts
• Familiarity with blockchain technology, in particular Tendermint
About the company:
Tathastu, the next-generation innovation lab, is Future Group's initiative to provide a new-age retail experience - combining the physical with the digital and enhancing it with data. We are creating next-generation consumer interactions by combining AI/ML, data science, and emerging technologies with consumer platforms.
The E-Commerce vertical under Tathastu has developed online consumer platforms for Future Group's portfolio of retail brands - Easyday, Big Bazaar, Central, Brand Factory, aLL, Clarks, and Coverstory. Backed by our network of offline stores, we have built a new retail platform that merges our online and offline retail streams. We use data to power all our decisions across our products and build internal tools to help us scale our impact with a small, closely-knit team.
Our widespread store network, robust logistics, and technology capabilities have made it possible to launch a ‘2-Hour Delivery Promise’ on every product across fashion, food, FMCG, and home products for orders placed online through the Big Bazaar mobile app and portal. This makes Big Bazaar the first retailer in the country to offer instant home delivery on almost every consumer product ordered online.
Job Responsibilities:
- You’ll streamline and automate the software development and infrastructure management processes and play a crucial role in executing high-impact initiatives and continuously improving processes to increase the effectiveness of our platforms.
- You’ll translate complex use cases into discrete technical solutions in platform architecture, design and coding, functionality, usability, and optimization.
- You will drive automation in repetitive tasks, configuration management, and deliver comprehensive automated tests to debug/troubleshoot Cloud AWS-based systems and BigData applications.
- You’ll continuously discover, evaluate, and implement new technologies to maximize the development and operational efficiency of the platforms.
- You’ll determine the metrics that will define technical and operational success and constantly track such metrics to fine-tune the technology stack of the organization.
Experience: 4 to 8 Yrs
Qualification: B.Tech / MCA
Required Skills:
- Experience with Linux/UNIX systems administration and Amazon Web Services (AWS).
- Infrastructure as Code (Terraform), Kubernetes and container orchestration, web servers (Nginx, Apache), application servers (Tomcat, Node.js, etc.), document stores, and relational databases (AWS RDS-MySQL).
- Site Reliability Engineering patterns and visibility/performance/availability monitoring (CloudWatch, Prometheus)
- Background in and happy to work hands-on with technical troubleshooting and performance tuning.
- Supportive and collaborative personality - ability to influence and drive progress with your peers
Our Technology Stack:
- Docker/Kubernetes
- Cloud (AWS)
- Python/GoLang Programming
- Microservices
- Automation Tools
Our Client is an IT infrastructure services company, focused and specialized in delivering solutions and services on Microsoft products and technologies. They are a Microsoft partner and cloud solution provider. Our Client's objective is to help small, mid-sized as well as global enterprises to transform their business by using innovation in IT, adapting to the latest technologies and using IT as an enabler for business to meet business goals and continuous growth.
With focused and experienced management and a strong team of IT Infrastructure professionals, they are adding value by making IT Infrastructure a robust, agile, secure and cost-effective service to the business. As an independent IT Infrastructure company, they provide their clients with unbiased advice on how to successfully implement and manage technology to complement their business requirements.
- Working closely with other engineers and administrators
- Developing deep knowledge of how best to customize the services available on various cloud platforms to help us become more secure and efficient.
- Assessing client requirements and coming up with costing for the sales team
- Planning and designing client infrastructure on Microsoft Azure and AWS
- Setting up alerts and monitoring the health of cloud resources
- Handling the day-to-day management of clients' cloud-based solutions
- Implementing security and protecting identities
- Diagnosing and troubleshooting technical issues relating to Microsoft Azure and AWS
- Helping customers successfully deploy and implement cloud computing solutions
- Resolving technical support tickets via telephone, chat, email and sometimes in-person
- Keeping self and team updated with new cloud services offerings from Microsoft, Amazon & Google
- Staying current with industry trends, making recommendations as needed to help the company excel
What you need to have:
- Experience in cloud-based tech
- This position requires excellent written and verbal communication and negotiation skills
- Should have working knowledge of Microsoft Azure Calculator and AWS Calculator
- A clear understanding of core Cloud Computing services
- Knowledge of various computer services on Microsoft Azure and AWS
- Knowledge of various storage services on Microsoft Azure and AWS
- Knowledge of log collecting services available with Microsoft Azure and AWS
- Experience of working with popular operating systems such as Linux & Windows
- Experience with computer networks
- Experience with technologies like Active Directory, network protocols, and subnetting
- Experience automating day-to-day tasks using PowerShell scripting
- Confidence in own abilities
- Knowledgeable within this subject area and a thought leader
- Fast assimilator of information
- Imaginative problem solver
- Structured organizer
- Strong relationship building skills
- Strong analytical & numeracy skills
- Ability to use initiative and work under pressure, prioritizing to meet deadlines
- Driven, leading on initiatives, being committed to the role, and delivering on objectives and deadlines
- Service Orientation, demonstrable commitment to customer service
We are looking for a self-motivated and goal-oriented candidate to lead the architecture, development, deployment, and maintenance of first-class, highly scalable, highly available SaaS platforms.
This is a very hands-on role. You will have a significant impact on Wenable's success.
Technical Requirements:
8+ years of SaaS and cloud architecture and development experience with technologies such as:
- AWS, GoogleCloud, Azure, and/or other
- Kafka, RabbitMQ, Redis, MongoDB, Cassandra, ElasticSearch
- Docker, Kubernetes, Helm, Terraform, Mesos, VMs, and/or similar orchestration, scaling, and deployment frameworks
- ProtoBufs, JSON modeling
- CI/CD utilities like Jenkins, CircleCI, etc.
- Log aggregation systems like Graylog or ELK
- Additional development tools typically used in orchestration and automation, like Python, shell, etc.
- Strong security best practices background
- A strong software development background is a plus
Leadership Requirements:
- Strong written and verbal skills. This role will entail significant coordination both internally and externally.
- Ability to lead projects of blended teams, on/offshore, of various sizes.
- Ability to report to executive and leadership teams.
- Must be data-driven and objective/goal-oriented.
- Works independently without supervision
- Work on continuous improvement of the products through innovation and learning. Someone with a knack for benchmarking and optimization
- Experience in deploying highly complex, distributed transaction processing systems.
- Stay abreast with new innovations and the latest technology trends and explore ways of leveraging these for improving the product in alignment with the business.
- As a component owner, where the component impacts across multiple platforms (5-10-member team), work with customers to obtain their requirements and deliver the end-to-end project.
Required Experience, Skills, and Qualifications
- 5+ years of experience as a DevOps Engineer. Experience with the Golang development cycle is a plus
- At least one End to End CI/CD Implementation experience
- Excellent problem-solving and debugging skills in the DevOps area
- Good understanding of containerization (Docker/Kubernetes)
- Hands-on build/package tool experience
- Experience with AWS services: Glue, Athena, Lambda, EC2, RDS, EKS/ECS, ALB, VPC, SSM, Route 53
- Experience with setting up CI/CD pipeline for Glue jobs, Athena, Lambda functions
- Experience architecting interaction with services and application deployments on AWS
- Experience with Groovy and writing Jenkinsfile
- Experience with repository management, code scanning/linting, secure scanning tools
- Experience with deployments and application configuration on Kubernetes
- Experience with microservice orchestration tools (e.g. Kubernetes, Openshift, HashiCorp Nomad)
- Experience with time-series and document databases (e.g. Elasticsearch, InfluxDB, Prometheus)
- Experience with message buses (e.g. Apache Kafka, NATS)
- Experience with key-value stores and service discovery mechanisms (e.g. Redis, HashiCorp Consul, etc)
- 5+ years hands-on experience with designing, deploying and managing core AWS services and infrastructure
- Proficiency in scripting using Bash, Python, Ruby, Groovy, or similar languages
- Experience in source control management, specifically with Git
- Hands-on experience in Unix/Linux and bash scripting
- Experience building, managing Helm-based build and release CI-CD pipelines for Kubernetes platforms (EKS, Openshift, GKE)
- Strong experience with orchestration and config management tools such as Terraform, Ansible or Cloudformation
- Ability to debug and analyze issues leveraging tools like AppDynamics, New Relic, and Sumo Logic
- Knowledge of Agile Methodologies and principles
- Good writing and documentation skills
- Strong collaborator with the ability to work well with core teammates and our colleagues across STS
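The CI/CD and monitoring requirements above often meet in a release gate: automation that decides, from observed metrics, whether a new deployment should proceed or roll back. A toy Python sketch of that logic, with thresholds and counters invented purely for illustration:

```python
def should_rollback(error_counts, request_counts, threshold=0.05):
    """Gate a deployment: roll back if the aggregate error rate over the
    observation window exceeds the threshold (default 5%)."""
    errors = sum(error_counts)
    requests = sum(request_counts)
    if requests == 0:
        return False  # no traffic yet; nothing to judge
    return errors / requests > threshold

# Per-minute counters after a hypothetical canary deploy; in practice these
# would come from a metrics backend (Prometheus, New Relic, etc.):
print(should_rollback([6, 9, 5], [100, 120, 90]))  # → True (20/310 ≈ 6.5% > 5%)
print(should_rollback([1, 0, 0], [100, 120, 90]))  # → False (≈ 0.3%)
```

Real progressive-delivery tools apply the same idea with richer statistics, but the gate itself is usually this simple: compare an observed rate against an agreed budget.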

