Kafka Administrator
Cargill Business Services

Posted by Paramjit Kaur
2 - 6 yrs
Best in industry
Bengaluru (Bangalore)
Skills
Apache Kafka
Kerberos
Zookeeper
Terraform
Linux administration

As a Kafka Administrator at Cargill you will work across the full set of data platform technologies, spanning on-prem and SaaS solutions, empowering highly performant, modern, data-centric solutions. Your work will play a critical role in enabling analytical insights and process efficiencies for Cargill’s diverse and complex business environments. You will work in a small team that shares your passion for building, configuring, and supporting platforms while sharing, learning and growing together.


  • Develop and recommend improvements to standard and moderately complex application support processes and procedures. 
  • Review, analyze and prioritize incoming incident tickets and user requests. 
  • Perform programming, configuration, testing and deployment of fixes or updates for application version releases. 
  • Implement security processes to protect data integrity and ensure regulatory compliance. 
  • Keep an open channel of communication with users and respond to standard and moderately complex application support requests and needs. 


MINIMUM QUALIFICATIONS

  • Minimum of 2-4 years of experience
  • Knowledge of Kafka cluster management, alerting/monitoring, and performance tuning
  • Full-ecosystem Kafka administration (Kafka brokers, ZooKeeper, kafka-rest, Kafka Connect)
  • Experience implementing Kerberos security
Preferred:
  • Experience in Linux system administration
  • Authentication plugin experience such as basic auth, SSL, and Kerberos
  • Production incident support, including root cause analysis
  • AWS EC2
  • Terraform
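As an illustration of the Kerberos requirement above, broker-side SASL/GSSAPI setup in Kafka typically looks like the following sketch. The hostname and listener values are hypothetical placeholders, not details from this posting:

```properties
# server.properties (excerpt): enable Kerberos (SASL/GSSAPI) on a broker.
# Hostname and port below are illustrative placeholders.
listeners=SASL_SSL://kafka-broker1.example.com:9093
security.inter.broker.protocol=SASL_SSL
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.enabled.mechanisms=GSSAPI
sasl.kerberos.service.name=kafka
```

In practice the broker also needs a JAAS configuration pointing at the service principal's keytab, and clients need matching SASL settings.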

About Cargill Business Services

Founded: 1900
Type: Products & Services
Size: 100-1000
Stage: Profitable

About: Official Twitter account for Cargill Inc.

Connect with the team: Ravi Eranki, Ravikanth Eranki

Similar jobs

Remote only
6 - 10 yrs
₹30L - ₹60L / yr
DevOps
Terraform
Docker
Amazon Web Services (AWS)
Kubernetes
Countries: India, Sri Lanka, Vietnam, Philippines, Malaysia, Indonesia
Multiplier enables companies to employ anyone, anywhere in a few clicks. Our
SaaS platform combines the multi-local complexities of hiring & paying employees
anywhere in the world, and automates everything. We are passionate about
creating a world where people can get a job they love, without having to leave the
people they love.
We are an early-stage startup with a "Day one" attitude and we are building a
team that will make Multiplier the market leader in this space. Every day is an
exciting one at Multiplier right now because we are figuring out a real problem in
the market and building a first-of-its-kind product around it. We are looking for
smart and talented people who will add on to our collective energy and share the
same excitement in making Multiplier a big deal. We are headquartered in
Singapore, but our team is remote.
What will I be doing? 👩💻👨💻
  • Owning and managing our cloud infrastructure on AWS.
  • Working as part of product development from inception to launch, and owning deployment pipelines through to site reliability.
  • Ensuring a highly available production site with proper alerting, monitoring and security in place.
  • Creating an efficient environment for product development teams to build, test and deploy features quickly by providing multiple environments for testing and staging.
  • Using infrastructure as code and the best DevOps methods and tools to innovate and keep improving.
  • Creating an automation culture and adding automation wherever it is needed.

DevOps Engineer Remote) 2
What do I need? 🤓
  • 4 years of industry experience in a similar DevOps role, preferably as part of a SaaS product team. You can demonstrate the significant impact your work has had on the product and/or the team.
  • Deep knowledge of AWS and the services available. 2 years of experience in building complex architecture on cloud infrastructure.
  • Exceptional understanding of containerisation technologies and Docker. Hands-on experience with Kubernetes, AWS ECS and AWS EKS.
  • Experience with Terraform or any other infrastructure-as-code solution.
  • Comfortable using at least one high-level programming language such as Java, JavaScript or Python. Hands-on experience scripting in Bash, Groovy and others.
  • Good understanding of security in web technologies and cloud infrastructure.
  • Able to work on and solve problems of a very complex nature, and enjoy doing it.
  • Willingness to quickly learn and use new technologies or frameworks.
  • Clear and responsive communication.
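As a taste of the infrastructure-as-code work this role involves, a minimal Terraform sketch provisioning a single AWS EC2 instance might look like this. The region, AMI ID, and tag values are illustrative placeholders, not details from this posting:

```hcl
# Minimal Terraform sketch; all values below are illustrative placeholders.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "ap-south-1"
}

resource "aws_instance" "app" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.micro"

  tags = {
    Name = "example-app-server"
  }
}
```

Running `terraform plan` against a sketch like this previews the change before `terraform apply` provisions it, which is the core of the review-then-deploy workflow these roles describe.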
SaaS product startup: all-in-one data stack for organizations
Agency job
via Qrata by Rayal Rajan
Bengaluru (Bangalore)
1 - 3 yrs
₹15L - ₹20L / yr
Docker
Kubernetes
DevOps
Terraform
Responsibilities :
● Explore
○ As a DevOps engineer, you will have multiple ways, tools & technologies to solve
a particular problem. We want you to take things into your own hands and figure
out the best way to solve it.
● PDCT
○ Plan, design, code & write test cases for problems you are solving
● Tuning
○ Help to tune performance and ensure high availability of infrastructure, including
reviewing system and application logs
● Security
○ Work on code-level application security
● Deploy
○ Deploy, manage and operate scalable, highly available, and fault-tolerant
systems in client environments.
Technologies (4 out of 5 are required) :
● Terraform*
● Docker*
● Kubernetes*
● Bash Scripting
● SQL
(* marked are a must)
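Since Docker is one of the musts above, a representative everyday task would be containerising a service. A minimal Dockerfile sketch follows; the base image, file names, and entry point are illustrative assumptions, not from this listing:

```dockerfile
# Illustrative Dockerfile; image, file names, and entry point are placeholders.
FROM python:3.12-slim
WORKDIR /app

# Install dependencies first so this layer caches between code changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# Run as a non-root user for basic container hardening.
RUN useradd --create-home appuser
USER appuser

CMD ["python", "app.py"]
```

Ordering the dependency install before the source copy is a common layer-caching choice: code edits then rebuild only the final layers.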
The challenges are great (as are the rewards). If you are looking to take these DevOps
challenges head on & wish to learn a great deal out of it and contribute to the company along
the way, this is the role for you.
Ready?
If developing an impactful product for an early-stage startup sounds appealing to you, let’s
have a conversation. (Confidential, of course)
Bengaluru (Bangalore)
2 - 6 yrs
₹8L - ₹20L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
Job Description:

About BootLabs

https://www.bootlabs.in/

- We are a boutique tech consulting partner, specializing in cloud-native solutions.
- We are obsessed with anything “CLOUD”. Our goal is to seamlessly automate the development lifecycle, and modernize infrastructure and its associated applications.
- With a product mindset, we enable start-ups and enterprises on cloud transformation, cloud migration, end-to-end automation and managed cloud services.
- We are eager to research, discover, automate, adapt, empower and deliver quality solutions on time.
- We are passionate about customer success. With the right blend of experience and exuberant youth in our in-house team, we have significantly impacted customers.




Technical Skills:


Expertise in any one hyperscaler (AWS/Azure/GCP), including basic services like networking, data and workload management.

  • AWS
      Networking: VPC, VPC Peering, Transit Gateway, Route Tables, Security Groups, etc.
      Data: RDS, DynamoDB, Elasticsearch
      Workload: EC2, EKS, Lambda, etc.
  • Azure
      Networking: VNET, VNET Peering
      Data: Azure MySQL, Azure MSSQL, etc.
      Workload: AKS, Virtual Machines, Azure Functions
  • GCP
      Networking: VPC, VPC Peering, Firewall, Flow Logs, Routes, Static and External IP Addresses
      Data: Cloud Storage, Dataflow, Cloud SQL, Firestore, Bigtable, BigQuery
      Workload: GKE, Instances, App Engine, Batch, etc.

Experience in any one of the CI/CD tools (GitLab/GitHub/Jenkins), including runner setup, templating and configuration.
Kubernetes experience or Ansible experience (EKS/AKS/GKE): basics like pods, deployments, networking, service mesh; has used a package manager like Helm.
Scripting experience (Bash/Python): automation in pipelines when required, system services.
Infrastructure automation (Terraform/Pulumi/CloudFormation): write modules, set up pipelines and version the code.

Optional:

Experience in any programming language is not required but is appreciated.
Good experience in Git, SVN or any other code management tool is required.
DevSecOps tools (Qualys/SonarQube/Black Duck) for security scanning of artifacts, infrastructure and code.
Observability tools (open source: Prometheus, Elasticsearch, OpenTelemetry; paid: Datadog, 24/7, etc.)
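To ground the Kubernetes basics mentioned above (pods, deployments, services), here is a minimal Deployment manifest sketch; the names, image, and replica count are illustrative placeholders:

```yaml
# Minimal Kubernetes Deployment sketch; names and image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-api
spec:
  replicas: 2                      # two pod copies for basic availability
  selector:
    matchLabels:
      app: example-api
  template:
    metadata:
      labels:
        app: example-api
    spec:
      containers:
        - name: example-api
          image: registry.example.com/example-api:1.0.0
          ports:
            - containerPort: 8080
```

A Helm chart, as referenced in the skills list, is essentially a templated, versioned bundle of manifests like this one.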
Pixuate
Posted by Ramya Kukkaje
Bengaluru (Bangalore)
5 - 9 yrs
₹10L - ₹20L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Oracle Cloud
  • Pixuate is a deep-tech AI start-up enabling businesses to make smarter decisions with our edge-based video analytics platform, offering innovative solutions across traffic management, industrial digital transformation, and smart surveillance. We aim to serve enterprises globally as a preferred partner for the digitization of visual information.

    Job Description

    We at Pixuate are looking for highly motivated and talented Senior DevOps Engineers to support building the next generation of innovative, deep-tech AI based products. If you are someone who has a passion for building great software, has an analytical mindset and enjoys solving complex problems, thrives in a challenging environment, is self-driven, and is constantly exploring and learning new technologies, with the ability to succeed on your own merits and fast-track your career growth, we would love to talk!

    What do we expect from this role?

    • This role’s key area of focus is to coordinate and manage the product from development through deployment, working with the rest of the engineering team to ensure smooth functioning.
    • Work closely with the Head of Engineering in building out the infrastructure required to deploy, monitor and scale the services and systems.
    • Act as the technical expert, innovator, and strategic thought leader within the Cloud Native Development, DevOps and CI/CD pipeline technology engineering discipline.
    • Should be able to understand how technology works and how various structures fall in place, with a high-level understanding of working with various operating systems and their implications.
    • Troubleshoots basic software or DevOps stack issues

    You would be great at this job, if you have below mentioned competencies

    • B.Tech/M.Tech/MCA/BSc/MSc/BCA, preferably in Computer Science
    • 5+ years of relevant work experience
    • Knowledge of various DevOps tools and technologies
    • Should have worked on tools like Docker, Kubernetes, Ansible in a production environment for data intensive systems.
    • Experience in developing Continuous Integration / Continuous Delivery (CI/CD) pipelines, preferably using Jenkins, scripting (Shell/Python), and Git and Git workflows
    • Experience implementing role based security, including AD integration, security policies, and auditing in a Linux/Hadoop/AWS environment.
    • Experience with the design and implementation of big data backup/recovery solutions.
    • Strong Linux fundamentals and scripting; experience as Linux Admin is good to have.
    • Working knowledge in Python is a plus
    • Working knowledge of TCP/IP networking, SMTP, HTTP, load-balancers (ELB, HAProxy) and high availability architecture is a plus
    • Strong interpersonal and communication skills
    • Proven ability to complete projects according to outlined scope and timeline
    • Willingness to travel within India and internationally whenever required
    • Demonstrated leadership qualities in past roles

    More about Pixuate:

    Pixuate, owned by Cocoslabs Innovative Solutions Pvt. Ltd., is a leading AI startup building the most advanced Edge-based video analytics products. We are recognized for our cutting-edge R&D in deep learning, computer vision and AI and we are solving some of the most challenging problems faced by enterprises. Pixuate’s plug-and-play platform revolutionizes monitoring, compliance to safety, and efficiency improvement for Industries, Banks & Enterprises by providing actionable real-time insights leveraging CCTV cameras.

    We have enabled customers such as Hindustan Unilever, Godrej, Secuira, L&T, Bigbasket, Microlabs, Karnataka Bank, etc., and are rapidly expanding our business to cater to the needs of the Manufacturing & Logistics and Oil and Gas sectors.

    Why join us?

    You will get an opportunity to work with the founders and be part of the 0-to-1 journey, getting coached and guided along the way. You will also get opportunities to sharpen your skills by being innovative and contributing to areas of your personal interest. Our culture encourages innovation and freedom, and rewards high performers with faster growth and recognition.

    Where to find us?

    Website: http://pixuate.com/

    LinkedIn: https://www.linkedin.com/company/pixuate-ai

     

    Place of Work:

    Work from Office – Bengaluru
Srijan Technologies
Posted by Adyasha Satpathy
Remote only
4 - 12 yrs
₹18L - ₹25L / yr
Kubernetes
Docker
DevOps
Amazon Web Services (AWS)
Windows Azure


Srijan Technologies is hiring for the DevOps Lead position- Cloud Team with a permanent WFH option.

Immediate Joiners or candidates with 30 days notice period are preferred.

Requirements:-

  • Minimum 4-6 years of experience in DevOps Release Engineering.
  • Expert-level knowledge of Git. 
  • Must have great command over Kubernetes
  • Certified Kubernetes Administrator
  • Expert-level knowledge of Shell Scripting & Jenkins so as to maintain continuous integration/deployment infrastructure. 
  • Expert level of knowledge in Docker. 
  • Expert level of Knowledge in configuration management and provisioning toolchain; At least one of Ansible / Chef / Puppet. 
  • Basic level of web development experience and setup: Apache, Nginx, MySQL 
  • Basic level of familiarity with Agile/Scrum process and JIRA. 
  • Expert level of Knowledge in AWS Cloud Services.
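As a sketch of the Jenkins-based CI/CD work this listing emphasises, a declarative pipeline often has the shape below. The stage commands, image names, and branch name are illustrative assumptions, not details from the posting:

```groovy
// Illustrative declarative Jenkinsfile; all commands are placeholders.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'docker build -t example-app:${GIT_COMMIT} .'
            }
        }
        stage('Test') {
            steps {
                sh './run_tests.sh'
            }
        }
        stage('Deploy') {
            when { branch 'main' }   // deploy only from the main branch
            steps {
                sh 'kubectl set image deployment/example-app app=example-app:${GIT_COMMIT}'
            }
        }
    }
}
```

Maintaining pipelines like this (shared libraries, agents, credentials) is the day-to-day substance of the "Shell Scripting & Jenkins" requirement above.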
Hammoq
Posted by Nikitha Muthuswamy
Remote, Indore
5 - 15 yrs
₹12L - ₹25L / yr
DevOps
Docker
Jenkins
Kubernetes
Linux administration

Hammoq is an exponentially growing startup in the US and UK.

Design and implement secure automation solutions for development, testing, and production environments


  • Build and deploy automation, monitoring, and analysis solutions

  • Manage our continuous integration and delivery pipeline to maximize efficiency

  • Implement industry best practices for system hardening and configuration management

  • Secure, scale, and manage Linux virtual environments

  • Develop and maintain solutions for operational administration, system/data backup, disaster recovery, and security/performance monitoring

  • Continuously evaluate existing systems with industry standards, and make recommendations for improvement

Desired Skills & Experiences

  • Bachelor’s or Master's degree in Computer Science, Engineering, or related field

  • Understanding of system administration in Linux environments

  • Strong knowledge of configuration management tools

  • Familiarity with continuous integration tools such as Jenkins, Travis CI, Circle CI

  • Proficiency in scripting languages including Bash, Python, and JavaScript

  • Strong communication and documentation skills

  • An ability to drive to goals and milestones while valuing and maintaining a strong attention to detail

  • Excellent judgment, analytical thinking, and problem-solving skills

  • Full understanding of software development lifecycle best practices

  • Self-motivated individual that possesses excellent time management and organizational skills

    In the PM's words:
    Bash scripting, containerd (or Docker), Linux operating system basics, Kubernetes, Git, Jenkins (or any pipeline management tool), GCP (or familiarity with any cloud technology).

    Linux is the major requirement; most candidates come from Windows, but we need Linux. If Windows experience is also there, it will be an added advantage.

    There is utmost certainty that you will be working with an amazing team.
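A small taste of the Bash scripting the PM highlights above: a helper of the kind monitoring scripts use, classifying disk usage against a threshold. The function name and the 80% threshold are illustrative assumptions, not part of the posting:

```shell
#!/usr/bin/env bash
# Hypothetical helper: classify a disk-usage percentage against a threshold.
# The name and threshold are illustrative, not from the job posting.
disk_usage_status() {
  local threshold=$1 usage=$2
  if [ "$usage" -ge "$threshold" ]; then
    echo "ALERT: ${usage}% used (threshold ${threshold}%)"
  else
    echo "OK: ${usage}% used"
  fi
}

# Example: check the root filesystem's current usage against 80%.
usage=$(df --output=pcent / | tail -1 | tr -dc '0-9')
disk_usage_status 80 "$usage"
```

Scripts like this typically run from cron or a pipeline step and feed an alerting channel when the ALERT branch fires.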

Hiring for one MNC client for Gurgaon location
Agency job
via Natalie Consultants by Swati Bansal
Gurugram, Delhi, Noida, Ghaziabad, Faridabad
3 - 7 yrs
₹6L - ₹14L / yr
DevOps
Amazon Web Services (AWS)
Kubernetes
Docker
Terraform

Exposure to development and implementation practices in a modern systems environment, together with exposure to working in a project team, particularly with reference to industry methodologies, e.g. Agile, continuous delivery, etc.

  • At least 3-5 years of experience building and maintaining AWS infrastructure (VPC, EC2, Security Groups, IAM, ECS, CodeDeploy, CloudFront, S3)
  • Strong understanding of how to secure AWS environments and meet compliance requirements
  • Experience using DevOps methodology and Infrastructure as Code
  • Automation / CI/CD tools – Bitbucket Pipelines, Jenkins
  • Infrastructure as code – Terraform, CloudFormation, etc.
  • Strong experience deploying and managing infrastructure with Terraform
  • Automated provisioning and configuration management – Ansible, Chef, Puppet
  • Experience with Docker, GitHub, Jenkins, ELK and deploying applications on AWS
  • Improve CI/CD processes, support software builds and CI/CD of the development departments
  • Develop, maintain, and optimize automated deployment code for development, test, staging and production environments
They provide both wholesale and retail funding.
Agency job
via Multi Recruit by Sapna Deb
Bengaluru (Bangalore)
8 - 10 yrs
₹40L - ₹50L / yr
DevOps
Docker
Amazon Web Services (AWS)
CI/CD
Ansible
  • 3+ years experience leading a team of DevOps engineers
  • 8+ years experience managing DevOps for large engineering teams developing cloud-native software
  • Strong in networking concepts
  • In-depth knowledge of AWS and cloud architectures/services.
  • Experience within the container and container orchestration space (Docker, Kubernetes)
  • Passion for CI/CD pipeline using tools such as Jenkins etc.
  • Familiarity with config management tools like Ansible, Terraform, etc.
  • Proven record of measuring and improving DevOps metrics
  • Familiarity with observability tools and experience setting them up
  • Passion for building tools and productizing services that empower development teams.
  • Excellent knowledge of Linux command-line tools and ability to write bash scripts.
  • Strong in Unix / Linux administration and management.


KEY ROLES/RESPONSIBILITIES:

  • Own and manage the entire cloud infrastructure
  • Create the entire CI/CD pipeline to build and release
  • Explore new technologies and tools and recommend those that best fit the team and organization
  • Own and manage the site reliability
  • Strong decision-making skills and metric-driven approach
  • Mentor and coach other team members
YourHRfolks
Posted by Pranit Visiyait
Remote, Jaipur
3 - 8 yrs
₹6L - ₹16L / yr
DevOps
Docker
Jenkins
Kubernetes
Terraform

Job Location: Jaipur

Experience Required: Minimum 3 years

About the role:

As a DevOps Engineer for Punchh, you will be working with our developers, SRE, and DevOps teams implementing our next generation infrastructure. We are looking for a self-motivated, responsible team player who loves designing systems that scale. Punchh provides a rich engineering environment where you can be creative, learn new technologies and solve engineering problems, all while delivering business objectives. The DevOps culture here is one with immense trust and responsibility. You will be given the opportunity to make an impact, as there are no silos here.

Responsibilities:

  • Deliver SLA and business objectives through whole lifecycle design of services through inception to implementation.
  • Ensuring availability, performance, security, and scalability of AWS production systems
  • Scale our systems and services through continuous integration, infrastructure as code, and gradual refactoring in an agile environment. 
  • Maintain services once a project is live by monitoring and measuring availability, latency, and overall system and application health.
  • Write and maintain software that runs the infrastructure that powers the Loyalty and Data platform for some of the world’s largest brands.
  • Participate in 24x7 on-call shifts for Level 2 and higher escalations
  • Respond to incidents and write blameless RCA’s/postmortems
  • Implement and practice proper security controls and processes
  • Providing recommendations for architecture and process improvements.
  • Definition and deployment of systems for metrics, logging, and monitoring on platform.

Must have:  

  • Minimum 3 years of experience in DevOps.
  • BS degree in Computer Science, Mathematics, Engineering, or equivalent practical experience.
  • Strong interpersonal skills.
  • Must have experience in CI/CD tooling such as Jenkins, CircleCI, TravisCI
  • Must have experience in Docker, Kubernetes, Amazon ECS or Mesos
  • Experience in code development in at least one high-level programming language from this list: Python, Ruby, Golang, Groovy
  • Proficient in shell scripting, and most importantly, know when to stop scripting and start developing.
  • Experience in creating highly automated infrastructures with any configuration management tool, such as Terraform, CloudFormation or Ansible.
  • In-depth knowledge of the Linux operating system and administration.
  • Production experience with a major cloud provider such as Amazon AWS.
  • Knowledge of web server technologies such as Nginx or Apache.
  • Knowledge of Redis, Memcache, or one of the many in-memory data stores.
  • Experience with various load balancing technologies such as Amazon ALB/ELB, HAProxy, F5.
  • Comfortable with large-scale, highly available distributed systems.
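For the web-server and load-balancing knowledge listed above, a minimal Nginx reverse-proxy sketch with an upstream pool might look like this; the hostnames, IPs, and ports are illustrative placeholders:

```nginx
# Illustrative nginx.conf excerpt; hosts and ports are placeholders.
upstream app_pool {
    least_conn;                      # send traffic to the backend with fewest connections
    server 10.0.1.10:8080;
    server 10.0.1.11:8080;
    server 10.0.1.12:8080 backup;    # used only when the primary servers are down
}

server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://app_pool;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Managed balancers such as ALB/ELB cover the same pattern as a service; the config above is the self-hosted equivalent.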

Good to have:  

  • Understanding of web standards (REST, SOAP APIs, OWASP, HTTP, TLS)
  • Production experience with HashiCorp products such as Vault or Consul
  • Expertise in designing, analyzing and troubleshooting large-scale distributed systems.
  • Experience in a PCI environment
  • Experience with Big Data distributions from Cloudera, MapR, or Hortonworks
  • Experience maintaining and scaling database applications
  • Knowledge of fundamental systems engineering principles such as CAP Theorem, Concurrency Control, etc.
  • Understanding of network fundamentals: OSI, TCP/IP, topologies, etc.
  • Understanding of auditing of infrastructure and helping the organization control infrastructure costs.
  • Experience in Kafka, RabbitMQ or any messaging bus.
Shuttl
Posted by Tanika Monga
NCR (Delhi | Gurgaon | Noida)
3 - 6 yrs
₹10L - ₹21L / yr
Terraform
Kubernetes
Ansible
WHAT WILL I DO?

You will work as a Site Reliability Engineer responsible for the availability, performance, monitoring, and incident response, among other things, of the platforms and services used and owned by Shuttl. The SRE team works alongside the Engineering team and owns every aspect of service availability as well as disaster recovery and business continuity plans. You will work with other Site Reliability Engineers and report to the Lead of the Site Reliability Engineering team.

HOW DO WE WORK?

Our engineering process is a five-step process consisting of phases for planning, developing, testing & profiling, releasing and monitoring. The planning phase consists of documenting the feature/task to be done, followed by various discussions. These discussions cover product, delivery estimates, release plan, monitoring plan, test plans, architecture, code design, technology choices and best-practice adoption. The development and testing phases coexist and involve writing code, unit tests, performance tests, profiling, stress testing, code reviews and QA testing. This phase is punctuated with daily scrums and standups. The release phase is largely about managing and communicating the release to customers and internal stakeholders and activating features. The last phase is the monitoring phase, where relevant metrics and exceptions are tracked and any critical refinement for the delivered feature is undertaken. This phase culminates with a retrospective.

SREs get involved in this process as early as possible to provide general guidance and recommendations and to help design the application to comply with community standards such as CNCF and 12 Factor. SRE involvement and influence tend to increase during the mid to final stages of development, when the application is primed for beta evaluation and all the tooling and instrumentation is finalized.

WHAT SKILLS SHOULD I HAVE?

For this role we expect you to have 3+ years of experience working as a DevOps Engineer or SRE. You should have a good grasp of Unix-like systems, access control, networking nuances, process isolation by means of kernel-provided features, distributed applications and algorithms, job schedulers and secret management, among other things. At Shuttl we are a big proponent of immutable infrastructure. All our infrastructure is hosted with Amazon Web Services and we use HashiCorp's Terraform to manage the infrastructure as code. A good handle on AWS and Terraform is therefore a definitive plus. Since SREs are expected to write a lot of code, you are also expected to be skilled in a programming language, preferably Python or Go.