27+ Splunk Jobs in India

We are hiring a Java Production Support Engineer with hands-on experience in Java, Spring Boot, and Splunk. You’ll be responsible for ensuring the smooth functioning of our production systems by proactively monitoring, troubleshooting, and resolving issues in real time.
Key Responsibilities
- Provide L2/L3 support for Java/Spring Boot-based applications in production.
- Monitor logs and system health using Splunk and other observability tools.
- Troubleshoot performance, latency, and availability issues.
- Perform root cause analysis (RCA) and work with dev teams for permanent fixes.
- Support incident and change management processes.
- Create and maintain runbooks and support documentation.
Must-Have Skills
- Strong knowledge of Java, Spring Boot, and REST APIs.
- Proficient in using Splunk for log analysis and monitoring (see the sample search sketch after this list).
- Good understanding of relational databases (SQL) and Linux commands.
- Experience with CI/CD tools (Jenkins, Git) and basic scripting (Bash, Python).
- Analytical thinking and excellent problem-solving skills.
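As an illustration of the Splunk log-analysis skill called out above, here is a minimal sketch of pulling recent ERROR events for a Spring Boot service through Splunk's REST search export endpoint. The host, credentials, index, and field names are placeholders, not details from this posting.

```python
"""Minimal sketch: stream recent ERROR events for a Spring Boot service from
Splunk's REST export endpoint. Host, credentials, index, and field names are
assumptions for illustration only."""
import requests

SPLUNK_HOST = "https://splunk.example.com:8089"   # hypothetical search head
AUTH = ("svc_readonly", "changeme")               # placeholder credentials

def recent_errors(service="orders-api", minutes=15):
    # Blocking export search: matching events are streamed back as JSON lines.
    query = f'search index=app_logs sourcetype=spring_boot service="{service}" log_level=ERROR'
    resp = requests.post(
        f"{SPLUNK_HOST}/services/search/jobs/export",
        auth=AUTH,
        verify=False,  # lab/self-signed certs only; use a CA bundle in production
        data={
            "search": query,
            "earliest_time": f"-{minutes}m",
            "output_mode": "json",
        },
        stream=True,
        timeout=60,
    )
    resp.raise_for_status()
    for line in resp.iter_lines():
        if line:
            yield line.decode("utf-8")

if __name__ == "__main__":
    for event in recent_errors():
        print(event)
```

The same query would normally live in a saved search or dashboard; scripting it like this is mainly useful for ad-hoc triage during an incident.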
General Description: Owns all technical aspects of software development for assigned applications.
Participates in the design and development of systems and application programs.
Functions as a senior member of an agile team and helps drive consistent development practices – tools, common components, and documentation.
Required Skills:
- In-depth experience configuring and administering EKS clusters in AWS.
- In-depth experience configuring **Splunk SaaS** in AWS environments, especially on **EKS**.
- In-depth understanding of OpenTelemetry and configuration of **OpenTelemetry Collectors** (a minimal export sketch follows this list).
- In-depth knowledge of observability concepts and strong troubleshooting experience.
- Experience implementing comprehensive monitoring and logging solutions in AWS using **CloudWatch**.
- Experience with **Terraform** and Infrastructure as Code.
- Experience with **Helm**.
- Strong scripting skills in Shell and/or Python.
- Experience with large-scale distributed systems and architecture knowledge (Linux/UNIX and Windows operating systems, networking, storage) in a cloud computing or traditional IT infrastructure environment.
- Good understanding of cloud concepts (storage/compute/network).
- Experience collaborating with cross-functional teams to architect observability pipelines for GCP services such as GKE, Cloud Run, and BigQuery.
- Experience with Git and GitHub.
- Experience with code build and deployment using GitHub Actions and Artifact Registry.
- Proficient in developing and maintaining technical documentation, ADRs, and runbooks.
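As a small illustration of the OpenTelemetry Collector item above, the sketch below shows how a Python service might export a span to a Collector over OTLP/gRPC (for example, a sidecar or DaemonSet endpoint in EKS). The Collector endpoint and service name are assumptions.

```python
"""Minimal sketch: emit one trace span to an OpenTelemetry Collector over OTLP/gRPC.
Requires opentelemetry-sdk and opentelemetry-exporter-otlp; the endpoint and
service name are placeholders."""
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

provider = TracerProvider(resource=Resource.create({"service.name": "demo-service"}))
# OTLP/gRPC export to a Collector; 4317 is the conventional OTLP gRPC port.
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://otel-collector:4317", insecure=True))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("checkout"):
    # Application work goes here; the span is batched and exported on exit.
    pass
```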
About the Company:
Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.
Why Gruve:
At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.
Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.
Position summary:
Our Professional Services team seeks a Cloud Engineer with a focus on Public Clouds for professional services engagements. In this role, the candidate will ensure the success of our engagements by providing deployment, configuration, and operationalization of cloud infrastructure as well as various other technologies such as on-prem, OpenShift, and hybrid environments.
A successful candidate for this position requires a good understanding of the public cloud systems (AWS, Azure) as well as a working knowledge of systems technologies, common enterprise software (Linux, Windows, Active Directory), cloud technologies (Kubernetes, VMware ESXi), and a good understanding of cloud automation (Ansible, CDK, Terraform, CloudFormation). The ideal candidate has industry experience and is confident working in a cross-functional team environment that is global in reach.
Key Roles & Responsibilities:
- Public Cloud: Lead Public Cloud deployments for our Cloud Engineering customers, including setup, automation, configuration, documentation, and troubleshooting. Red Hat OpenShift on AWS/Azure experience is preferred.
- Automation: Develop and maintain automated testing systems to ensure uniform and reproducible deployments for common infrastructure elements using tools like Ansible, Terraform, and CDK.
- Support: In this role the candidate may need to support the environment as part of the engagement through hand-off. Requisite knowledge of operations will be required.
- Documentation: The role can require significant documentation of the deployment and the steps to maintain the system. The Cloud Engineer will also be responsible for all documentation required for customer hand-off.
- Customer Skills: This position is customer facing, and effective communication and customer service are essential.
Basic Qualifications:
- Bachelor's or Master's degree in computer programming or quality assurance.
- 5-8 years as an IT Engineer or DevOps Engineer with automation skills and AWS or Azure experience, preferably in Professional Services.
- Proficiency in enterprise tools (Grafana, Splunk, etc.), software (Windows, Linux, databases, Active Directory, VMware ESXi, Kubernetes), and techniques (knowledge of best practices).
- Demonstrable proficiency with automation packages (Ansible, Git, CDK, Terraform, CloudFormation, Python).
Preferred Qualifications
- Exceptional communication and interpersonal skills.
- Strong ownership abilities, attention to detail.
Dynatrace Infrastructure Engineer
Job Summary:
We are seeking an experienced Dynatrace Engineer to install, configure, and maintain Dynatrace environments for monitoring application performance and user experiences. The ideal candidate should have hands-on expertise in Application Performance Monitoring (APM) tools and the ability to troubleshoot performance issues effectively.
Experience Requirements:
- Total Experience: 3 to 7 years
- Relevant Experience: 3 to 5 years in Dynatrace administration, setup, configuration, and maintenance
Key Responsibilities:
- Install, configure, and manage Dynatrace for application performance monitoring.
- Deploy and optimize Dynatrace environments for efficient monitoring and alerting.
- Analyze application performance, troubleshoot performance issues, and ensure seamless user experience.
- Work with Application Performance Monitoring (APM) tools to identify and resolve performance bottlenecks.
- Collaborate with development and infrastructure teams to integrate Dynatrace into existing systems.
- Ensure high availability, scalability, and compliance with organizational monitoring standards.
Key Requirements:
- Hands-on experience in Dynatrace setup, configuration, and maintenance.
- Strong understanding of application performance monitoring and optimization.
- Expertise in troubleshooting application performance issues.
- Familiarity with cloud environments and virtualization platforms.
- Ability to work in a dynamic environment and support cross-functional teams.
Preferred Qualifications:
- Experience with other APM tools (e.g., New Relic, AppDynamics, Splunk, etc.).
- Knowledge of cloud platforms (AWS, Azure, GCP) and containerized environments.
- Strong scripting skills (PowerShell, Python, Bash) for automation and monitoring enhancement.
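To illustrate the scripting-for-automation item above, here is a hedged sketch that lists problems from the Dynatrace Problems API (v2). The environment URL and token are placeholders, and the token is assumed to carry the required read permission.

```python
"""Minimal sketch: list problems from the Dynatrace Problems API v2.
The environment URL and API token are placeholders; the token is assumed
to have the problems-read permission."""
import requests

DT_ENV = "https://abc12345.live.dynatrace.com"   # hypothetical environment URL
DT_TOKEN = "dt0c01.XXXX"                         # placeholder API token

resp = requests.get(
    f"{DT_ENV}/api/v2/problems",
    headers={"Authorization": f"Api-Token {DT_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
for problem in resp.json().get("problems", []):
    # Each entry carries status, severity, title, affected entities, etc.
    print(problem.get("status"), problem.get("title"))
```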
About Sumo Logic
At Sumo Logic, we specialize in empowering the digital workforce through our advanced SaaS analytics platform, focusing on reliable and secure cloud-native applications.
Step into the heart of innovation with our dynamic and collaborative support team! As a Technical Support Engineer at Sumo Logic, you will play a crucial role in empowering our customers to harness the full potential of our cutting-edge cloud technology. Your expertise in logging, SIEM, and cloud solutions will be vital in guiding our customers toward achieving unparalleled business success.
You will be at the forefront of solving complex challenges and driving technological advancements by providing exceptional technical support and insights. Join us and transform challenges into opportunities, enhancing customer satisfaction and shaping the future of technology.
At Sumo Logic, our technical support team is recognized as one of our crown jewels, featuring some of the most technically adept individuals in the industry. Work here is challenging and rewarding, propelling you forward in a fast-paced and dynamic environment.
What You Will Do
As a Technical Support Engineer, your role will involve:
- Working with customer support tickets in our Salesforce Service Cloud ticketing system
- Providing enterprise-level support to our customers and partners, focusing on technical issues related to logging, metrics, SIEM, and cloud technologies.
- Engaging directly with customers to quickly assess, troubleshoot, and resolve issues from simple to complex, ensuring effective communication and setting clear expectations.
- Documenting enhancements or defects in our products and advising on best practices for implementing and using the Sumo Logic service.
- Offering valuable feedback to our engineering, product management, and CS leadership teams based on customer interactions and experiences.
- Developing and refining processes, procedures, and tools for the support team to optimize customer and stakeholder interactions.
- Producing Knowledge Base (KB) articles for common issues lacking a current KB or revising existing KB articles for the ticketing system KB and public community KB.
What You Will Bring With You
- Extensive SaaS Experience: Proven track record in a technical role managing multiple customer accounts, preferably with a background in DevOps Engineering, SOC analysis, or similar technical positions.
- Customer-Centric Approach: Passion for customer satisfaction and problem-solving, with the ability to manage relationships across various levels, from technical practitioners to executives.
- Communication Excellence: Professional and transparent communication skills, with the ability to deliver technical context to stakeholders at various levels using remote (e.g., Zoom) or written media.
- Strategic Problem-Solving: Ability to navigate ambiguity, proactively seek necessary support, and manage multiple accounts with attention to detail.
- Situation Management: Capable of assessing client scenarios, documenting issue timelines, and working with executive management and product engineering towards root cause analysis and final assessments.
- Desire to Learn: Thrive in a fast-paced, high-growth, rapidly changing environment with the ability to work with and deeply understand a new product or service. Utilize Sumo-offered LinkedIn learning and other resources to increase technical knowledge and sharpen soft skills.
- Ability to support multiple international time zones
Desired Technical Qualifications
- Monitoring Platform Experience: Proficiency in Sumo Logic or similar platforms (e.g., Splunk, Datadog, Elastic, New Relic, AppDynamics, VMware Tanzu).
- In-depth Knowledge of Logging Systems: Proficiency in systems like Windows Event Viewer, syslog, rsyslog, and syslog-ng.
- Expertise in SIEM and Cloud Technologies: Strong understanding of cloud services (AWS, GCP, Azure) and security information and event management (SIEM) principles.
- Advanced Technical Skills: Experience with system administration, SSH management, and basic scripting and programming (Java, C++, Python, PowerShell, Bash, etc.).
- Query Language Proficiency: SQL or similar query language skills.
- Kubernetes and Docker Proficiency: Extensive experience in setup, configuration, troubleshooting, tuning, and infrastructure management.
- Network Savvy: Solid knowledge of TCP/IP, ping, traceroute, Netcat, tcpdump, Wireshark, nslookup, etc.
- OSS skills in OTel, Prometheus, and Falco are a plus
- Sumo Logic experience is a big plus but not required
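As a small, hedged illustration of working with Sumo Logic programmatically, the sketch below starts a search job through the Search Job API and polls it until results are ready. The API endpoint depends on your deployment (api.sumologic.com, api.us2.sumologic.com, and so on), and the access ID/key, query, and time range are placeholders.

```python
"""Minimal sketch: start a Sumo Logic Search Job and poll it (Search Job API).
Endpoint, access ID/key, query, and time range are placeholders."""
import time
import requests

API = "https://api.sumologic.com/api/v1"   # deployment-specific endpoint
AUTH = ("ACCESS_ID", "ACCESS_KEY")         # placeholder credentials

with requests.Session() as s:              # keep the same session cookie across calls
    s.auth = AUTH
    job = s.post(f"{API}/search/jobs", json={
        "query": "_sourceCategory=prod/app error",
        "from": "2024-01-01T00:00:00",
        "to": "2024-01-01T01:00:00",
        "timeZone": "UTC",
    })
    job.raise_for_status()
    job_id = job.json()["id"]

    while True:                            # poll until the job finishes gathering results
        status = s.get(f"{API}/search/jobs/{job_id}").json()
        if status.get("state") == "DONE GATHERING RESULTS":
            break
        time.sleep(5)

    msgs = s.get(f"{API}/search/jobs/{job_id}/messages",
                 params={"offset": 0, "limit": 10}).json()
    for m in msgs.get("messages", []):
        print(m["map"].get("_raw"))
```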
Travel Requirements
Minimal, but generally once a quarter to once a year (1-5%) for corporate training and mandatory meetings.
Education
Bachelor's or Master's degree in Engineering, Computer Science, or a similar field, or equivalent work experience.
Join us at Sumo Logic and contribute to our mission of revolutionizing technical support in the digital business world, with a particular focus on logging, SIEM, and cloud technologies.
About Us
Sumo Logic, Inc. empowers the people who power modern, digital business. Sumo Logic enables customers to deliver reliable and secure cloud-native applications through its Sumo Logic SaaS Analytics Log Platform, which helps practitioners and developers ensure application reliability, secure and protect against modern security threats, and gain insights into their cloud infrastructures. Customers worldwide rely on Sumo Logic to get powerful real-time analytics and insights across observability and security solutions for their cloud-native applications. For more information, visit www.sumologic.com.
Sumo Logic Privacy Policy. Employees will be responsible for complying with applicable federal privacy laws and regulations, as well as organizational policies related to data protection.

We are seeking a Production Support Engineer to join our team.
Responsibilities:
- Be the first line of defense for production and test environment issues.
- Work collaboratively with the team to identify, manage, and resolve ongoing incidents.
- Troubleshoot and connect with appropriate teams to effectively triage issues impacting test and production environments.
- Understand system architecture, upstream, and downstream dependencies to enable effective participation in triage and restoration activities.
- Perform systems monitoring of applications within the IRS domain after service restoration and post patching, maintenance, and upgrades.
- Create necessary service tickets and ensure tickets are routed to the appropriate technical teams.
- Provide weekend support for various activities including patching, release deployments, security updates, and 3rd party updates.
- Keep up with info alerts, patching alerts, and delivery partners' activities.
- Update stakeholders to plan for upcoming maintenance as well as alert them about service issues and restoration.
- Manage and communicate about upcoming maintenance in the test environment on a daily basis.
- Liaise with various stakeholders to gain approval for alert communications, including confirmation before an all-clear communication.
- Work closely with testing and development teams to prepare for infrastructure updates and release readiness.
- Submit Application Redirects tickets for planned maintenance after gaining approval from management.
- Participate in analysis and improvement of system performance.
- Host daily operational standup.
- Provide additional support to existing production support procedures and process improvements.
- Provide regular status reports to management on application status and other metrics.
- Collaborate with management to improve and customize reports related to production support.
- Plan and manage support for incident management tools and processes.
Requirements:
- Bachelor's Degree in computer science, engineering, or related field.
- AWS Cloud certification.
- 3+ years of relevant IT work experience, including cloud experience.
- Knowledge of Java and microservice development and deployments.
- Understanding of the business processes behind applications.
- Strong analytical, problem-solving, negotiation, task and project management, and organizational skills.
- Strong oral and written communication skills, including process documentation.
- Proficiency in Microsoft Office applications (Word, PowerPoint, Excel, and Project).
- Proficiency with computer systems, databases, and SharePoint.
- Knowledge of Splunk and AppDynamics.
Benefits:
- Work Location: Remote
- 5 days working
You can apply directly through the link: https://zrec.in/gQWFK?source=CareerSite
Explore our Career Page for more such jobs: careers.infraveo.com
DevOps Engineer (Permanent)
Experience: 8 to 12 yrs
Location: Remote for 2-3 months (Any Mastek Location- Chennai/Mumbai/Pune/Noida/Gurgaon/Ahmedabad/Bangalore)
Max Salary = 28 LPA (including 10% variable)
Notice Period: Immediate / max 10 days
Mandatory Skills: Either Splunk/Datadog, GitLab, Retail Domain
· Bachelor’s degree in Computer Science/Information Technology, or in a related technical field or equivalent technology experience.
· 10+ years’ experience in software development
· 8+ years of experience in DevOps
· Mandatory Skills: Either Splunk/Datadog, GitLab, EKS, Retail domain experience
· Experience with the following Cloud Native tools: Git, Jenkins, Grafana, Prometheus, Ansible, Artifactory, Vault, Splunk, Consul, Terraform, Kubernetes
· Working knowledge of containers, i.e., Docker and Kubernetes, ideally with experience transitioning an organization through its adoption
· Demonstrable experience with configuration, orchestration, and automation tools such as Jenkins, Puppet, Ansible, Maven, and Ant to provide full stack integration
· Strong working knowledge of enterprise platforms, tools and principles including Web Services, Load Balancers, Shell Scripting, Authentication, IT Security, and Performance Tuning
· Demonstrated understanding of system resiliency, redundancy, failovers, and disaster recovery
· Experience working with a variety of vendor APIs including cloud, physical and logical infrastructure devices
· Strong working knowledge of Cloud offerings & Cloud DevOps Services (EC2, ECS, IAM, Lambda, Cloud services, AWS CodeBuild, CodeDeploy, Code Pipeline etc or Azure DevOps, API management, PaaS)
· Experience managing and deploying Infrastructure as Code, using tools like Terraform, Helm charts, etc.
· Manage and maintain standards for DevOps tools used by the team
- Dynatrace Expertise: Lead the implementation, configuration, and optimization of Dynatrace monitoring solutions across diverse environments, ensuring maximum efficiency and effectiveness.
- Cloud Integration: Utilize expertise in AWS and Azure to seamlessly integrate Dynatrace monitoring into cloud-based architectures, leveraging PaaS services and IAM roles for efficient monitoring and management.
- Application and Infrastructure Architecture: Design and architect both application and infrastructure landscapes, considering factors like Oracle, SQL Server, Shareplex, Commvault, Windows, Linux, Solaris, SNMP polling, and SNMP traps.
- Cross-Platform Integration: Integrate Dynatrace with various products such as Splunk, APIM, and VMWare to provide comprehensive monitoring and analysis capabilities.
- Inter-Account Integration: Develop and implement integration strategies for seamless communication and monitoring across multiple AWS accounts, leveraging Terraform and IAM roles.
- Experience working with on-premise applications and infrastructure
- Experience with AWS & Azure; cloud certified.
- Dynatrace Experience & Certification
Position Title: Manager – Security Operations
Organization/Function: The Manager is responsible for day-to-day operational and project delivery for a set of customers.
Relevant Experience: 10+ years of experience in the security area, with at least 2 years as a Security Manager.
Educational Qualification: BE/B.Tech/ME/M.Tech/Graduate/Master in any stream with an excellent academic record.
Must-have Skills:
• Must know common security policy frameworks and possess knowledge of how security programs are run at mid- to large-scale companies
• Must have managed a team delivering a “Managed Security Service” or “Security Operations Center”
• Prior working background in either SIEM tools (Splunk, ArcSight, QRadar, DNIF, etc.) or vulnerability assessment and management tools (Qualys/Rapid7) and processes
• Has broader context and understanding of managed security services
• Must have a service mindset and empathy; must deal with a level of ambiguity, chaos, and apparent stubbornness from customers, and manage around it by thinking through the issue or request from the customer’s perspective to drive to a reasonable conclusion
• Must have prior experience in Project Management
• Must have prior experience with the onsite-offshore delivery model and should have directly worked with US/European customers or colleagues
• Must have ITIL process knowledge



UKG's engineering teams want to add 4 staff aug consultants (2 in the US and 2 in Noida) to support their identity platform infrastructure, deployments to prod, escalations/KTLO, and debugging needs. They'll need to partner with internal dev teams who consume our Identity platform on any issues or new integrations needed, as well as on RCA and enhancing our observability.
They need to have the below qualifications:
• Linux (Ubuntu): understanding of the Linux OS and experience troubleshooting complex issues, whether infrastructure related (disk/IOPS, network latency, JVM/GC) or application-level defects.
• Python or a similar scripting language to help automate manual tasks and remediation efforts (a small disk-check/PagerDuty sketch appears at the end of this posting).
• Ansible
• Identity platforms/technology: SAML2, OAuth2, directory servers (LDAP and LDAP query language), OpenDJ, OpenAM, Auth0, Okta, SSO
• Java: understanding of Java tuning and best practices for high-volume applications at scale
• Nginx
• Grafana
• PagerDuty
• Postman/API
• Kibana/Splunk
• Dynatrace
• GCP preferred; Azure is good as well.
This role is work-from-office.
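Below is the disk-check/PagerDuty sketch referenced in the qualifications above: a minimal example of the kind of Python remediation/alerting automation described, checking disk usage on a Linux host and raising a PagerDuty incident through the Events API v2. The routing key, threshold, and service names are assumptions.

```python
"""Minimal sketch: check disk usage and raise a PagerDuty alert via Events API v2.
Routing key and threshold are placeholders."""
import shutil
import socket
import requests

ROUTING_KEY = "YOUR_EVENTS_V2_ROUTING_KEY"   # placeholder integration key

def disk_pct_used(path="/"):
    usage = shutil.disk_usage(path)
    return 100.0 * usage.used / usage.total

def trigger_pagerduty(summary, severity="warning"):
    # PagerDuty Events API v2 "trigger" event
    resp = requests.post(
        "https://events.pagerduty.com/v2/enqueue",
        json={
            "routing_key": ROUTING_KEY,
            "event_action": "trigger",
            "payload": {
                "summary": summary,
                "source": socket.gethostname(),
                "severity": severity,
            },
        },
        timeout=10,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    pct = disk_pct_used("/")
    if pct > 85:                             # assumed threshold
        trigger_pagerduty(f"Disk usage at {pct:.1f}% on /")
```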
Job Description
Roles & Responsibilities
- Work across the entire landscape that spans network, compute, storage, databases, applications, and business domain
- Use the Big Data and AI-driven features of vuSmartMaps to provide solutions that will enable customers to improve the end-user experience for their applications
- Create detailed designs, solutions and validate with internal engineering and customer teams, and establish a good network of relationships with customers and experts
- Understand the application architecture and transaction-level workflow to identify touchpoints and metrics to be monitored and analyzed
- Analyze data and provide insights and recommendations
- Stay proactive in communicating with customers. Manage planning and execution of platform implementation at customer sites.
- Work with the product team in developing new features, identifying solution gaps, etc.
- Interest and aptitude in learning new technologies: Big Data, NoSQL databases, Elasticsearch, MongoDB, DevOps.
Skills & Experience
- At least 2+ years of experience in IT Infrastructure Management
- Experience in working with large-scale IT infra, including applications, databases, and networks.
- Experience in working with monitoring tools, automation tools
- Hands-on experience in Linux and scripting.
- Knowledge/experience in the following technologies will be an added plus: Elasticsearch, Kafka, Docker containers, MongoDB, Big Data, SQL databases, the ELK stack, REST APIs, web services, and JMX.
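As a small illustration of the Elasticsearch/monitoring items above, here is a hedged sketch that queries an Elasticsearch index for recent application errors. The cluster endpoint, index pattern, and field names are assumptions, and the example assumes an unsecured development cluster.

```python
"""Minimal sketch: query Elasticsearch for recent ERROR-level log documents.
Host, index pattern, and field names are assumptions; a secured cluster would
additionally need TLS and authentication."""
import requests

ES = "http://localhost:9200"               # placeholder cluster endpoint
query = {
    "size": 10,
    "sort": [{"@timestamp": "desc"}],
    "query": {
        "bool": {
            "must": [{"match": {"level": "ERROR"}}],
            "filter": [{"range": {"@timestamp": {"gte": "now-15m"}}}],
        }
    },
}
resp = requests.post(f"{ES}/app-logs-*/_search", json=query, timeout=30)
resp.raise_for_status()
for hit in resp.json()["hits"]["hits"]:
    print(hit["_source"].get("@timestamp"), hit["_source"].get("message"))
```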
POSITION SUMMARY:
We are looking for a passionate, high-energy individual to help build and manage the infrastructure network that powers the Product Development Labs for F5 Inc. The F5 Infra Engineer plays a critical role for our Product Development team by providing valuable services and tools for the F5 Hyderabad Product Development Lab. The Infra team supports both production systems and customized/flexible testing environments used by Test and Product Development teams. As an Infra Engineer, you'll have the opportunity to work with cutting-edge technology and talented individuals. The ideal candidate will have experience in private and public cloud (AWS, Azure, GCP), OpenStack, storage, backup, VMware, KVM, Xen, and Hyper-V hypervisor server administration, as well as networking and automation in a data center operations environment at global enterprise scale with Kubernetes and OpenShift container platforms.
EXPERIENCE
7- 9+ Years – Software Engineer III
PRIMARY RESPONSIBILITIES:
- Drive the design, project build, infrastructure setup, monitoring, measurement, and improvement of the quality of services provided, including network and virtual instance services from OpenStack, VMware VIO, public and private cloud, and DevOps environments.
- Work closely with customers to understand requirements and deliver them on time.
- Work closely with F5 architects and vendors to understand emerging technologies and the F5 product roadmap, and how they would benefit the Infra team and its users.
- Work closely with the team and complete deliverables on time.
- Consult with testers, application, and service owners to design scalable, supportable network infrastructure to meet usage requirements.
- Assume ownership of large/complex systems projects; mentor Lab Network Engineers in best practices for ongoing maintenance and scaling of large/complex systems.
- Drive automation efforts for the configuration and maintainability of the public/private cloud.
- Lead product selection for replacement or new technologies.
- Address user tickets in a timely manner for the covered services.
- Deploy, manage, and support production and pre-production environments for our core systems and services.
- Migration and consolidation of infrastructure.
- Design and implement major service and infrastructure components.
- Research, investigate, and define new areas of technology to enhance existing services or new service directions.
- Evaluate performance of services and infrastructure; tune and re-evaluate the design and implementation of current source code and system configuration.
- Create and maintain scripts and tools to automate the configuration, usability, and troubleshooting of the supported applications and services.
- Take ownership of activities and new initiatives.
- Provide global infra support from India for product development teams.
- Provide on-call support on a rotational basis across global time zones.
- Handle vendor management for hardware and software evaluations and keep systems up to date.
KNOWLEDGE, SKILLS AND ABILITIES:
- Have in-depth, multi-disciplined knowledge of storage, compute, network, and DevOps technologies, including the latest cutting-edge technologies.
- Multi-cloud: AWS, Azure, GCP, OpenStack, DevOps operations.
- IaaS: Infrastructure as a Service, Metal as a Service, platform services.
- Storage: Dell EMC, NetApp, Hitachi, Qumulo, and other storage technologies.
- Hypervisors: VMware, Hyper-V, KVM, Xen, and AHV.
- DevOps: Kubernetes, OpenShift, Docker, and other container and orchestration platforms.
- Automation: scripting experience in Python/Shell/Golang, full-stack development, and application deployment.
- Tools: Jenkins, Splunk, Kibana, Terraform, Bitbucket, Git, CI/CD configuration.
- Datacenter operations: racking, stacking, cable matrix, solution design, and solutions architecture.
- Networking skills: Cisco/Arista switches and routers; experience with cable matrix design and pathing (fiber/copper).
- Experience in SAN/NAS storage (EMC/Qumulo/NetApp and others).
- Experience with Red Hat Ceph storage.
- A working knowledge of Linux, Windows, and hypervisor operating systems and virtual machine technologies.
- SME (subject matter expert) for cutting-edge technologies.
- Data center architect and storage expert-level certified professional experience.
- A solid understanding of high-availability systems, redundant networking, and multipathing solutions.
- Proven problem resolution related to network infrastructure, judgment, negotiating, and decision-making skills, along with excellent written and oral communication skills.
- Working experience in object, block, and file storage technologies.
- Experience in backup technologies and backup administration.
- Dell/HP/Cisco UCS server administration is an additional advantage.
- Ability to quickly learn and adopt new technologies.
- Strong experience with and exposure to open-source platforms.
- Working experience with monitoring tools such as Zabbix, Nagios, Datadog, etc.
- Working experience with bare-metal services and OS administration.
- Working experience with cloud networking such as AWS IPsec, Azure ExpressRoute, GCP VPN tunnels, etc.
- Working experience with software-defined networking (VMware NSX, SDN, Open vSwitch, etc.).
- Working experience with systems engineering and Linux/Unix administration.
- Working experience with database administration: PostgreSQL, MySQL, NoSQL.
- Working experience with automation/configuration management using Puppet, Chef, or an equivalent.
- Working experience with DevOps operations: Kubernetes, containers, Docker, and Git repositories.
- Experience in build system processes, code inspection, and delivery methodologies.
- Knowledge of creating operational dashboards and execution lanes.
- Experience and knowledge of DNS, DHCP, LDAP, AD, domain controller services, and PXE services.
- SRE experience: responsible for availability, latency, performance, efficiency, change management, monitoring, emergency response, and capacity planning.
- Vendor support: OEM upgrades, coordinating technical support, and troubleshooting experience.
- Experience in handling on-call support and hierarchy processes.
- Knowledge of scale-out and scale-in architecture.
- Working experience with ITSM/process management tools like ServiceNow, Jira, and Jira Align.
- Knowledge of Agile and Scrum principles.
- Working experience with ServiceNow.
- Knowledge sharing, transition experience, and self-learning behaviors.
Intuitive is now hiring a DevOps Consultant for full-time employment.
DevOps Consultant
Work Timings: 6.30 AM IST – 3.30 PM IST (HKT 9.00 AM – 6.00 PM)
Key Skills / Requirements:
- Mandatory
- Integrating Jenkins pipelines with Terraform and Kubernetes
- Writing Jenkins JobDSL and Jenkinsfiles using Groovy
- Automation and integration with AWS using Python and the AWS CLI (a small boto3 sketch follows this skills list)
- Writing ad-hoc automation scripts using Bash and Python
- Configuring container-based application deployment to Kubernetes
- Git and branching strategies
- Integration of tests in pipelines
- Build and release management
- Deployment strategies (eg. canary, blue/green)
- TLS/SSL certificate management
- Beneficial
- Service Mesh
- Deployment of Java Spring Boot based applications
- Automated credential rotation using AWS Secrets Manager
- Log management and dashboard creation using Splunk
- Application monitoring using AppDynamics
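Here is the boto3 sketch referenced in the AWS automation item above: a minimal example that lists running EC2 instances missing a required tag. The region, tag key, and credentials setup are assumptions; it requires boto3 and configured AWS credentials.

```python
"""Minimal sketch: report running EC2 instances missing a required tag.
Region and tag key are hypothetical; requires boto3 and AWS credentials."""
import boto3

REQUIRED_TAG = "owner"                       # hypothetical tagging policy

ec2 = boto3.client("ec2", region_name="ap-south-1")
paginator = ec2.get_paginator("describe_instances")

for page in paginator.paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
):
    for reservation in page["Reservations"]:
        for inst in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
            if REQUIRED_TAG not in tags:
                print(f'{inst["InstanceId"]} is missing tag "{REQUIRED_TAG}"')
```

In a pipeline, a check like this would typically run as a scheduled Jenkins job and feed its findings into reporting or auto-remediation.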
If your profile matches the requirements, share your resume at anitha.katintuitvedotcloud
Regards,
Anitha. K
TAG Specialist
The Network Engineer position provides a secure network infrastructure and resolves any network security related findings. Evaluate and analyze new technologies for potential applicability to business needs. Design, install, and support the network infrastructure capable of supporting the business processes at Citizens Energy Group. Design, test, and implement a disaster recovery process and procedures for the network infrastructure. Maintain a solid technical knowledge of computer hardware, software and applications available within the industry to support our current systems and assist in planning future directions.
Detailed Description
Essential Duties and Responsibilities:
- Responsible to provide on-call support in a 24/7 environment and remedy network outages or other interruptions.
- Maintain existing network equipment, installing new equipment and replacing faulty equipment in plant environments. Vendor and warranty management for network hardware and software.
- Design, implement, document, and test Disaster Recovery process and procedures for network infrastructure
- Manage network infrastructure backup and recovery processes
- Security and critical IOS update patch management
- Advise team for network infrastructure-related issues
- Proactive monitoring to identify outages, service degradation, or security threats, and responding accordingly
- Prepare, maintain, and adhere to procedures for logging, reporting, and statistically monitoring network data.
- Design, implement, and maintain network infrastructure to meet SLA requirements
- Design, implement, and maintain Firewall infrastructure for internal and remote use.
- Design, implement, and maintain network management platform for proactive monitoring and management of network infrastructure
- Designing, maintaining, and effectively operating Local Area Networks (LANs) and Wide Area Networks (WANs), Wireless, Internet Connectivity, and Network Security
- Prepare and ensure accuracy of documentation, procedure manuals, and help sheets for network installations, including data, voice, and video systems.
- Assist in development of business continuity and disaster-recovery plans, and maintain current knowledge of the plan. Respond to emergency network outages in accordance with business continuity and disaster-recovery plans
- Performs other duties as assigned
- This position will require an onsite presence and 80% local travel, as it will be supporting our utilities in the Greater Indianapolis Area
Job Requirements
Required Qualifications:
- Four-year degree with emphasis in Computer Science, Computer Technology, MIS or equivalent experience.
- Minimum 5 years of designing and supporting network devices and network protocols (network switches, firewalls, wireless controllers and APs, BGP, OSPF, EIGRP, SIP, and Layer 2 protocols Spanning Tree, VPC, VDC, and VTP).
- Experience with WatchGuard Firewall, Active Directory, NPS, and RADIUS configurations
- Good interpersonal skills to interact with customers and team members
- Strong organization skills to balance and prioritize work
- Ability to work independently and as part of a team
Job Description - 221135
Cloudera is looking for a highly experienced software engineer with strong expertise in Java development and a specialty in platform architecture to join the Cloudera Lens team.
Cloudera Lens is a high-fidelity, context-rich, and fully correlated self-service observability & optimization tool that analyzes the state and wellness of a customer’s environments and empowers them to proactively discover and address unknown unknowns in their data, scale operations without compromising on performance or costs, and expedite remediation of issues.
As a Java engineer, you will be working in a team of engineers led by an Engineering Manager, collaborating with other engineers and stakeholders in India, United States, and other countries around the globe.
Responsibilities:
- Lead the architecture, design, and implementation of key aspects of Cloudera Lens data collection, data analytics, data correlations, and recommendations.
- Work with product management, engineering, UX, and documentation teams to deliver high-quality products.
- Interact with partners and customers to help define roadmap and shape the technology.
- Empower team members to deliver high-quality software at a fast pace.
Requirements:
- Proven track record of performance.
- Passionate about software engineering. Clean coding habits, attention to detail, and focus on quality and testability.
- Strong software engineering skills: object-oriented design, data structures, algorithms.
- Experience with containerization orchestration technologies: Kubernetes, Docker.
- Deep knowledge of system architecture, including process, memory, storage, and network management is highly desired.
- Experience with the following: Java, concurrent programming, and related areas.
- Experience with Java memory management, performance tuning and scaling
- Experience in building horizontally scalable products handling multi-terabyte datasets is desirable.
- Experience with relational and non-relational databases: PostgreSQL, Amazon S3.
- Strong oral and written communication skills in English.
Advantageous To Have:
- Experience in building enterprise-grade cloud products.
- Experience with building/using cross-functional observability products.
- BS or MS in computer science.
- Cloud experience: AWS, Azure, GCP.
- Python, Linux, Micro Services experience.
• Handling critical incidents/escalations, reviewing incidents, and tracking them to closure
• Good experience in SIEM tools, event logging, and event analysis
• Good knowledge of enterprise security products like firewalls, IPS, web/content filtering tools, and compliance tools
• Team management, performance monitoring, and preparing weekly and monthly reports to share with stakeholders as needed
• Good knowledge of common security attacks and targeted attacks
• Good experience in forensic analysis and packet analysis tools like Wireshark
• Assisting and mentoring L2/L3 analysts and grooming them to move to the next level
• Contributing to continuous monitoring and improvement of the security posture of the organization
• Experience managing a team of 25+ members across multiple locations

• Primarily responsible for security event monitoring, management, and response
• Ensure incident identification, assessment, quantification, reporting, communication, mitigation, and monitoring
• Revise and develop processes to strengthen the current Security Operations Framework; review policies and highlight the challenges in managing SLAs
• Responsible for team and vendor management, overall use of resources, and initiation of corrective action where required for the Security Operations Center
• Management, administration, and maintenance of security devices under the purview of the SOC, which consists of state-of-the-art technologies
• Perform threat management and threat modeling, identify threat vectors, and develop use cases for security monitoring
• Responsible for integration of standard and non-standard logs in SIEM
• Creation of reports, dashboards, and metrics for SOC operations and presentation to senior management
• Coordination with stakeholders; build and maintain positive working relationships with them


- Professional experience in enterprise Java software development using the Spring MVC framework, RESTful APIs, and SOA
- Experience working in the Cloud (AWS)
- Outstanding problem-solving skills
- API development experience
- Exposure to monitoring tools such as ELK, Splunk
- Experience with Selenium for UI automated tests written in Cucumber or Scala
- Able to handle day-to-day challenges and own the resolution of issues as they arise.
Years: 5-9 Years
Job Responsibilities
Primary:
- Responsible for security road map for EPDM application
- Train the CI/CD team on the required technologies and security adoption
- Lead the upskill program within the team
- Support the application architect with the right inputs on security processes and tools
- Help set up DevSecOps for EPDM.
- Find security vulnerabilities in the development process and sealed secrets
- Support in defining the Three-tier architecture.
Secondary:
- Coordination with different IT stakeholders as and when needed
- Suggest and implement further toolchains for DevOps and GitOps
- Responsible for training peer colleagues
Skills:
Mandatory skill:
- Expert knowledge of container solutions. Must have >3 years of experience working with networking & debugging within Docker and Kubernetes.
- Hands-on experience with Kubernetes workload deployments using Kustomize & Helm.
- Good understanding of Bitnami, HashiCorp, and other secret management tools
- SAST/DAST integration in the CI/CD pipeline: design and implementation (an illustrative secret-scan sketch follows this list).
- Expert knowledge of source control systems and build & integration tools (e.g., Git, Jenkins & Maven).
- Hands-on experience with designing the CI/CD architecture & building pipelines (on on-prem, cloud & hybrid infrastructure services).
- Experience with security log management tools (e.g., Splunk, ELK/EFK stack, Azure Monitor, or similar).
- Experience with monitoring tools like Prometheus-Grafana & Dynatrace.
- Experience with Infrastructure as a Service / cloud computing (preferably Azure).
- Expert in writing automation scripts in YAML and Unix/Linux shell.
- Pulumi would be an added advantage.
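Below is the illustrative secret-scan sketch mentioned in the SAST/DAST item above. It is only a toy pre-commit style check; real pipelines normally rely on dedicated scanners (for example gitleaks or truffleHog) wired into CI, and the regex patterns here are illustrative assumptions.

```python
"""Minimal sketch: scan a working tree for likely hard-coded secrets and fail
the CI stage if any are found. Patterns and paths are illustrative only."""
import re
import sys
from pathlib import Path

PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Private key header": re.compile(r"-----BEGIN (RSA|EC|OPENSSH) PRIVATE KEY-----"),
    "Generic secret assignment": re.compile(r"(password|secret|token)\s*[:=]\s*['\"][^'\"]{8,}", re.I),
}

def scan(root="."):
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file() or ".git" in path.parts:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for name, pattern in PATTERNS.items():
            for match in pattern.finditer(text):
                findings.append((path, name, match.group(0)[:20]))
    return findings

if __name__ == "__main__":
    hits = scan(".")
    for path, name, snippet in hits:
        print(f"{path}: possible {name}: {snippet}...")
    sys.exit(1 if hits else 0)     # non-zero exit fails the CI stage
```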
Job Description
Roles and Responsibilities
· Design, code, test, debug, implement, and document complex WSO2 sequences.
· Monitoring and logging tool stacks: Splunk/ELK/Prometheus, Grafana.
· Perform build activities for WSO2 ESB middleware integration, which involves writing XSLT, ESB coding, configuration, and analysis activities.
· Troubleshooting various problems in different stages of development using log files, traces, and Log Analyzer.
· Responsible for understanding the requirement, solution design, coordinate the development and testing activities (end to end)
· Guide the team regarding WSO2 platform best practices, framework, reusable artefacts and ensure code quality and timely deliverables.
· Work with functional and technical customers to determine solutions that drive additional business value.
· Work with Github, Azure Devops and CI tools to automate dev, build, deployment and testing.
· Good knowledge in messaging brokers: WSO2 Message Broker, Apache Kafka.
· Monitoring the server (Monitoring logs and WSO2 metrics).
Desired Candidate Profile
· Candidate must have minimum 2+ years of hands-on experience in WSO2, preferably with WSO2 certification.
· Extensive experience in Integration by using WSO2 Product Stack (API Manager 2.6/3.x, Enterprise Integrator 6.5 and Identity Server 5.7.0)
· Experience in Implementing APIs in WSO2 EI and On-boarding APIs into WSO2 API Manager
· Experience in WSO2 API Manager for designing API facades and designing and implementing API Proxies.
· Hands-on experience in designing and developing high volume web services using API Protocols and Data Formats (REST, JSON, SOAP & XML).
· Must have hands on experience / knowledge with CI tools to automate dev, build, deploy.
· Experience in programming languages: Java, JavaScript, Python
· Experience in onboarding web applications into WSO2 Identity Server for authentication and authorization
Qualifications
• Minimum of a Bachelor's or Master’s degree in Computer Science, or a related four-year degree.
• 4 years of hands on experience in Integration

Fast growing fintech organization based out of Noida
Join the team of a fast-growing fintech organization that has recently raised $25M via equity and debt fund to rapidly increase the ever-growing space of Fintech World.
A financial technology platform made up of the most knowledgeable, passionate, and creative people in our business. We recognize the power of financial services to break down barriers and make it easy for customers to avail banking, investments, and lending solutions – that responsibility inspires us to be the place where exceptional people want to do their best work and to provide them the tools to do so.
Looking for motivated, highly driven, and hardworking Software Engineers for the Neobanking platform for a leading Fintech organization in a highly collaborative, fast-paced, high-energy environment. You will build the platform for user onboarding, profile management, banking, lending, risk mitigation, and analytics for driving intelligence from customer interaction patterns. It’s an exciting time to join the team as we’re setting about building the next generation of our platform.
What you'll do:
- Design and implement scalable server-side solutions using Java.
- Write optimized front-end code using HTML, CSS, and JavaScript
- Write unit, automation, and integration tests
- Implement quality application logging for operational monitoring at scale
- Investigate, debug and resolve production site issues
- Work with co-located teammates to deliver on common goals
Who you are:
- Professional experience in enterprise Java software development using Spring MVC frameworks, RESTful APIs, and SOA
- Proficiency in HTML/CSS/JavaScript/jQuery
- Experience with Docker and Kubernetes
- Experience with Microservices
- Experience with DevOps technologies is a plus
- Experience with Selenium for UI automated tests written in Cucumber or Scala
- Working knowledge of design patterns and CI/CD principles
- First-class communication skills in written and verbal form
- Outstanding problem-solving skills
- A commitment to producing high-quality code with an attention to detail
- Dedication and a self-motivated desire to learn
- A collaborative, team-orientated attitude
- 3-8 years of professional experience in Java software development
- Experience working in the Cloud (AWS)
- API development experience
- Exposure to monitoring tools such as ELK, Splunk
Information Security Specialist
Notice Period: 45 days / Immediate Joining
Banyan Data Services (BDS) is a US-based data-focused Company that specializes in comprehensive data solutions and services, headquartered in San Jose, California, USA.
We are looking for an Information Security Specialist who has expertise and deep knowledge of information security regulations, compliance, and SIEM tools, and the ability to develop, describe, and implement security baselines and policies.
It's a once-in-a-lifetime opportunity to join our rocket ship startup run by a world-class executive team. We are looking for candidates that aspire to be a part of the cutting-edge solutions and services we offer that address next-gen data evolution challenges.
Key Qualifications
· Design, deploy, and support Information Security Solutions provided by BDS
· Assist clients to carry out the IT Risk Management assessment on both on-prem and cloud platforms
· Provide subject matter expertise on IT security compliances during the security audits to meet various security governances.
· Research and strategic analysis of existing and evolving IT and data security technologies
· Establish baselines to define required security controls for all infrastructure components and the application stack
· Follow the latest vulnerability and threat intelligence updates across a wide range of technologies and make recommendations for improvements to the security baselines
· Oversee security event monitoring, understand the impact, and coordinate remediation efforts
· Create and optimize SIEM rules to adjust the specification of alerts in response to incident follow-ups
· Must be able to work a flexible schedule during off-hours
Key Skills & Qualification
· Minimum of 4 years relevant work experience in information/cyber security, audit, and compliance
· Certification in any technical security specialty (e.g., CISA, CISSP, CISM)
· Experience in managing SIEM products like ArcSight, QRadar, Sumo Logic, RSA NetWitness Suite, ELK, Splunk
· Exposure to security audit tools on public cloud platforms
· Solid understanding of the underlying LINUX/UNIX and Windows OS security architecture
· Certified Ethical Hacker would be a plus
· Handling of Security audits is a must
· Proven interpersonal skills while contributing to team effort by accomplishing related results
· Passion for learning new technologies and the ability to do so quickly.
www.banyandata.com
Job Summary:
The Senior Forensic Analyst has strong technical skills and an eagerness to lead projects and work with our clients. Apply Incident Response, forensics, log analysis, and malware triage skills to solve complex intrusion cases at organizations around the world. Our consultants must be comfortable working in teams to tackle challenging projects, communicating with clients, and creating and presenting high-quality deliverables.
ROLES AND RESPONSIBILITIES
· Investigate breaches leveraging forensics tools including Encase, FTK, X-Ways, SIFT, Splunk, and custom investigation tools to determine the source of compromises and malicious activity that occurred in client environments. The candidate should be able to perform forensic analysis on:
· Host-based such as Windows, Linux, and Mac OS X
· Firewall, web, database, and other log sources to identify evidence and artifacts of malicious and compromised activity.
· Cloud-based platforms such as Office 365, Google, Azure, AWS, etc.
· Perform analysis on identified malicious artifacts
· Contribute to the curation of threat intelligence related to breach investigations
· Excellent verbal and written communication and experience presenting technical findings to a wide audience of varying technical expertise
· Be responsible for integrity in analysis, quality in client deliverables, as well as gathering caseload intelligence.
· Responsible for developing the forensic report for breach investigations related to ransomware, data theft, and other misconduct investigations.
· Must also be able to manage multiple projects daily.
· Manage junior analysts and/or external consultants providing investigative support
· Act as the most senior forensic analyst, assisting staff, reviewing all forensic work product to ensure consistency and accuracy, and providing support based on workload or complexity of matters
· Ability to analyze workflow, processes, tools, and procedures to create further efficiency in forensic investigations
· Ability to work greater than 40 hours per week as needed.
DISCLAIMER: The above statements are intended to describe the general nature and level of work being performed. They are not intended to be an exhaustive list of all responsibilities, duties, and skills required of personnel so classified.
SKILLS AND KNOWLEDGE
· Proficient with host-based forensics, network forensics, malware analysis, and data breach response
· Experienced with EnCase, Axiom, X-Ways, FTK, SIFT, ELK, Redline, Volatility, and open-source forensic tools
· Experience with common scripting or programming languages, including Perl, Python, Bash, or PowerShell
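As a small illustration of the scripting skill above, here is a hedged sketch that hashes files and records modification times into a CSV timeline for triage. The evidence path is a placeholder; in a real engagement you would work from a mounted forensic image and established tooling rather than the live filesystem.

```python
"""Minimal sketch: build a simple hash-and-mtime timeline for triage.
The evidence mount point and output path are placeholders."""
import csv
import hashlib
from datetime import datetime, timezone
from pathlib import Path

def sha256(path, chunk=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while data := f.read(chunk):
            h.update(data)
    return h.hexdigest()

def build_timeline(root, out_csv="timeline.csv"):
    with open(out_csv, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["mtime_utc", "sha256", "path"])
        for p in Path(root).rglob("*"):
            if p.is_file():
                mtime = datetime.fromtimestamp(p.stat().st_mtime, tz=timezone.utc)
                writer.writerow([mtime.isoformat(), sha256(p), str(p)])

if __name__ == "__main__":
    build_timeline("/mnt/evidence")          # placeholder mount of an acquired image
```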
JOB REQUIREMENTS
· Must have at least 5+ years of incident response or digital forensics experience with a passion for cybersecurity
· Consulting experience preferred.
WORK ENVIRONMENT
While performing the responsibilities of this position, the work environment characteristics listed below are representative of the environment the employee will encounter: Usual office working conditions. Reasonable accommodations may be made to enable people with disabilities to perform the essential functions of this job.
PHYSICAL DEMANDS
· No physical exertion is required.
· Travel within or outside of the state.
· Light work: Exerting up to 20 pounds of force occasionally, and/or up to 10 pounds of force as frequently as needed to move objects.


As a Perl developer, you will own and run critical systems for one of our large international clients in the fashion industry. You will be part of a high performing self-motivated team, collaborate with the client and be exposed to different layers of the client infrastructure.
- The role will involve incorporating enhancements via coding, performing data changes, and supporting the core client platforms
- Have great attention to detail
- Should be comfortable looking after the systems and maintaining them
- Take full responsibility to run the client systems
- Escalate to stakeholders when necessary
- Be self-driven and work with little supervision towards a common team and company purpose
Experience: Total 7 to 10 years of experience, with a minimum of 3 years in Perl
Requirements
You Rock at
- Perl 5.21
- Catalyst
- PostgreSQL
- CPAN for modules
- DBIx::Class (DBIC) (object-relational mapping)
- Test::More (unit testing)
You are good at
- Bitbucket, Puppet + Hiera, Jenkins
- JQuery + a bit of UI
- Plack, Selenium
- Linux CentOS 6
- Grafana, Splunk
Benefits
- A fantastic working environment built on the principles of lean and self organisation;
- Fun, happy and politics-free work culture;
- Competitive salaries and benefits.
Note: We are looking for immediate joiners and expect the selected candidate to join within 15 days. Buyout reimbursement is available for applicants with a 30 to 60 day notice period who are ready to join within 15 days.
- Proficient in Java, Node or Python
- Experience with New Relic, Splunk, SignalFx, Datadog, etc.
- Monitoring and alerting experience (see the StatsD sketch after this list)
- Full stack development experience
- Hands-on with building and deploying microservices in Cloud (AWS/Azure)
- Experience with Terraform for Infrastructure as Code
- Should have experience troubleshooting live production systems using monitoring/log analytics tools
- Should have experience leading a team (2 or more engineers)
- Experienced using Jenkins or similar deployment pipeline tools
- Understanding of distributed architectures
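Here is the StatsD sketch referenced in the monitoring item above: a minimal example that emits counters and timings over the plain StatsD UDP protocol, as consumed by Datadog's DogStatsD agent and similar collectors. The agent address and metric names are assumptions.

```python
"""Minimal sketch: emit application metrics over the StatsD UDP protocol.
Agent host/port and metric names are placeholders."""
import socket
import time

STATSD_ADDR = ("127.0.0.1", 8125)      # local agent; placeholder

def send(metric: str) -> None:
    # StatsD is fire-and-forget UDP: "name:value|type"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(metric.encode("ascii"), STATSD_ADDR)

def record_request(duration_ms: float, ok: bool) -> None:
    send("orders_api.requests:1|c")                         # counter
    send(f"orders_api.latency:{duration_ms:.1f}|ms")        # timing
    if not ok:
        send("orders_api.errors:1|c")

if __name__ == "__main__":
    start = time.monotonic()
    # ... handle a request here ...
    record_request((time.monotonic() - start) * 1000.0, ok=True)
```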

Requires a bachelor's degree in area of specialty and experience in the field or in a related area. Familiar with standard concepts, practices, and procedures within a particular field. Relies on experience and judgment to plan and accomplish goals. Performs a variety of tasks. A degree of creativity and latitude is required. Typically reports to a supervisor or manager.
Designs, develops, and implements web-based Java applications to support business requirements. Follows approved life cycle methodologies, creates design documents, and performs program coding and testing. Resolves technical issues through debugging, research, and investigation.
Additional Job Details:
Strong in Java, Spring, Spring Boot, REST, and developing microservices.
Knowledge of or experience with Cassandra preferred
Knowledge of or experience with Kafka
Good to have but not a must
Good to know:
Reporting tools like Splunk/Grafana
Protobuf
Python/Ruby