11+ IBM Tivoli Identity Manager Jobs in Bangalore (Bengaluru) | IBM Tivoli Identity Manager Job openings in Bangalore (Bengaluru)
1) Hands-on experience with Tivoli ITM v5.1 troubleshooting
2) Familiar with WebSphere Application Server
3) Familiar with Active Directory configuration
4) Knowledge of LDAP, SSO, etc.
5) Knowledge of Identity & Access Management processes
Qualification:
- B.Tech or equivalent degree.
- 6 to 8 years of total experience.
Job Title: Infrastructure Engineer
Experience: 4.5+ years
Location: Bangalore
Employment Type: Full-Time
Joining: Immediate Joiner Preferred
💼 Job Summary
We are looking for a skilled Infrastructure Engineer to manage, maintain, and enhance our on-premises and cloud-based systems. The ideal candidate will have strong experience in server administration, virtualization, hybrid cloud environments, and infrastructure automation. This role requires hands-on expertise, strong troubleshooting ability, and the capability to collaborate with cross-functional teams.
Roles & Responsibilities
- Install, configure, and manage Windows and Linux servers.
- Maintain and administer Active Directory, DNS, DHCP, and file servers.
- Manage virtualization platforms such as VMware or Hyper-V.
- Monitor system performance, logs, and uptime to ensure high availability.
- Provide L2/L3 support, diagnose issues, and maintain detailed technical documentation.
- Deploy and manage cloud servers and resources in AWS, Azure, or Google Cloud.
- Design, build, and maintain hybrid environments (on-premises + cloud).
- Administer data storage systems and implement/test backup & disaster recovery plans.
- Handle cloud services such as cloud storage, networking, and identity (IAM, Azure AD).
- Ensure compliance with security standards like ISO, SOC, GDPR, PCI DSS.
- Integrate and manage monitoring and alerting tools.
- Support CI/CD pipelines and automation for infrastructure deployments.
- Collaborate with Developers, DevOps, and Network teams for seamless system integration.
- Troubleshoot and resolve complex infrastructure & system-level issues.
Key Skills Required
- Windows Server & Linux Administration
- VMware / Hyper-V / Virtualization technologies
- Active Directory, DNS, DHCP administration
- Knowledge of CI/CD and Infrastructure as Code
- Hands-on experience in AWS, Azure, or GCP
- Experience with cloud migration and hybrid cloud setups
- Proficiency in backup, replication, and disaster recovery tools
- Familiarity with automation tools (Terraform, Ansible, etc.) preferred
- Strong troubleshooting and documentation skills
- Understanding of networking concepts (TCP/IP, VPNs, firewalls, routing) is an added advantage
Key Responsibilities
- Flow Development & Automation
  - Develop, maintain, and enhance CAD automation scripts and flows for physical design (place-and-route, timing closure, physical verification, etc.).
  - Integrate and validate EDA tools for synthesis, floorplanning, clock tree synthesis, routing, and sign-off.
- EDA Tool Support
  - Work closely with design teams to debug and resolve CAD/EDA tool issues.
  - Collaborate with EDA vendors for tool evaluations, feature requests, and bug fixes.
- Physical Verification & Sign-Off
  - Build and maintain flows for DRC, LVS, ERC, IR drop, EM, and timing sign-off.
  - Ensure physical design flows meet foundry requirements and tapeout schedules.
- Methodology Development
  - Develop best practices and guidelines for efficient design closure.
  - Evaluate new EDA technologies and propose improvements to existing workflows.
Design, implement, and improve the analytics platform
Implement and simplify self-service data query and analysis capabilities of the BI platform
Develop and improve the current BI architecture, emphasizing data security, data quality and timeliness, scalability, and extensibility
Deploy and use various big data technologies and run pilots to design low-latency data architectures at scale
Collaborate with business analysts, data scientists, product managers, software development engineers, and other BI teams to develop, implement, and validate KPIs, statistical analyses, data profiling, prediction, forecasting, clustering, and machine learning algorithms
Educational
At Ganit we are building an elite team, ergo we are seeking candidates who possess the following backgrounds:
7+ years of relevant experience
Expert-level skills in writing and optimizing complex SQL
Knowledge of data warehousing concepts
Experience in data mining, profiling, and analysis
Experience with complex data modelling, ETL design, and using large databases in a business environment
Proficiency with the Linux command line and systems administration
Experience with languages like Python/Java/Scala
Experience with Big Data technologies such as Hive/Spark
Proven ability to develop unconventional solutions; sees opportunities to innovate and leads the way
Good experience of working on cloud platforms like AWS, GCP & Azure, having worked on projects involving creation of a data lake or data warehouse
Excellent verbal and written communication.
Proven interpersonal skills and the ability to convey key insights from complex analyses in summarized business terms; ability to effectively communicate with multiple teams
Good to have
AWS/GCP/Azure Data Engineer Certification
We are looking to hire candidates (freshers/experienced) from any graduate/postgraduate course for the role of Relationship Manager.
Candidates should be proficient in English and Hindi (multilingual candidates are also preferred) and have knowledge of MS Excel; experience in the e-commerce sector (minimum 6 months to 1 year) is preferred but not mandatory.
Working days- 6 (No week-off on weekends)
Shift type- Day Shift
Location- Bangalore
Looking for candidates to work in Kaikondrahalli, Sarjapur Road, Bangalore - 560035.
Salary Offered
UG
CTC offered: ₹4,80,000 per annum (₹3,00,000 fixed + ₹1,80,000 variable)
PG
CTC offered: ₹5,10,000 per annum (₹3,30,000 fixed + ₹1,80,000 variable)
Any candidate with a minimum of 6 months to 1 year of experience
CTC offered: ₹5,10,000 per annum (₹3,30,000 fixed + ₹1,80,000 variable)
Graduates and postgraduates from any branch are welcome
Location: Bengaluru
Department: - Engineering
Bidgely is looking for an extraordinary and dynamic Senior Data Analyst to be part of its core team in Bangalore. You must have delivered exceptionally high-quality, robust products dealing with large data. Be part of a highly energetic and innovative team that believes nothing is impossible with some creativity and hard work.
● Design and implement a high-volume data analytics pipeline in Looker for Bidgely's flagship product.
● Implement data pipeline in Bidgely Data Lake
● Collaborate with product management and engineering teams to elicit & understand their requirements & challenges and develop potential solutions
● Stay current with the latest tools, technology ideas and methodologies; share knowledge by clearly articulating results and ideas to key decision makers.
● 3-5 years of strong experience in data analytics and in developing data pipelines.
● Very good expertise in Looker
● Strong in data modeling, developing SQL queries and optimizing queries.
● Good knowledge of data warehouse (Amazon Redshift, BigQuery, Snowflake, Hive).
● Good understanding of Big data applications (Hadoop, Spark, Hive, Airflow, S3, Cloudera)
● Attention to detail; strong communication and collaboration skills.
● BS/MS in Computer Science or equivalent from premier institutes.
Looking for talented and passionate people to be part of the team for an upcoming project at client location.
QUALIFICATION AND EXPERIENCE
- Preferably 4+ years of working experience on production PostgreSQL databases.
- Experience of working in a production support environment
- Engineering or Equivalent degree
- Passion for open-source technologies is desired
ADDITIONAL SKILLS
- Install & configure PostgreSQL and EnterpriseDB
- Technical capability across PostgreSQL 9.x, 10.x, and 11.x
- Server tuning
- Troubleshooting of database issues
- Linux shell scripting
- Install, configure, and maintain failover mechanisms
- Backup, restoration, and point-in-time database recovery
- A demonstrable ability to articulate and sell the benefits of modern platforms, software and technologies.
- A real passion for being curious and a continuous learner. You are someone that invests in yourself as much as you invest in your professional relationships.
RESPONSIBILITIES
- Monitoring database performance
- Optimizing queries and handling escalations
- Analysing and assessing the impact and risk of low- to medium-risk changes on high-profile production databases
- Implementing security features
- DR implementation and switchover
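The backup and point-in-time recovery skills listed above can be sketched as a minimal configuration fragment. This is only an illustration, assuming PostgreSQL 11 (per the listed versions) and hypothetical paths for the WAL archive and base backup:

```
# postgresql.conf -- enable WAL archiving, the prerequisite for PITR
wal_level = replica
archive_mode = on
archive_command = 'cp %p /var/lib/pgsql/wal_archive/%f'   # hypothetical archive path

# Base backup, taken as the postgres user:
#   pg_basebackup -D /var/lib/pgsql/backups/base -Fp -P

# recovery.conf (PostgreSQL 11 and earlier) -- replay archived WAL up to a target time
restore_command = 'cp /var/lib/pgsql/wal_archive/%f %p'
recovery_target_time = '2021-01-01 12:00:00'              # hypothetical recovery target
```

From PostgreSQL 12 onward the recovery settings move into postgresql.conf itself, which is one reason version-specific capability (9.x/10.x/11.x) is called out above.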
WHAT IS IN IT FOR YOU?
You would be adding the experience of working with a leading open-source solutions company in the South East Asia region to your career. You would get to learn from the leaders and grow in the industry. This would be a great opportunity for you to grow in your career through continuous learning, adding depth and breadth of technologies. Since our client works with leading open-source technologies and engages with large enterprises, it creates enormous possibilities for career growth for our team.
Responsibilities
- Be proactive on development, with new flow design and security practices
- Develop new user-facing features
- Build reusable code and libraries for future use
- Optimize the application for maximum speed and scalability
- Ensure that all user input is validated before submitting to the back-end
- Collaborate with other team members and stakeholders
- Use coding to develop the aesthetics implemented within a website or modules, including UI elements
Qualification and experience
- Minimum 2 years of overall IT industry experience
- Minimum 1 year of relevant experience in Drupal (custom module/theme development) or Symfony development
- Acquia certification (not mandatory)
Key skills
- Experience in core PHP using MVC and MVVM architectures
- Knowledge of database operations with MySQL/MariaDB, including stored procedures
- Strong knowledge of front-end technologies (HTML5, CSS3, JavaScript, jQuery)
- Knowledge and experience of OOP
- Ability to write APIs, including RESTful APIs
- Knowledge of continuous integration practices (CI/CD)
- Strong documentation, communication, and team collaboration skills
- Experience in iterative development methodologies like Agile/Scrum
- Understanding of the technical feasibility of UI/UX designs
- Working experience with React/Angular would be a value-add for the position (not mandatory)
• Develop and maintain Infrastructure as Code (IaC) using Terraform and Ansible
• Draft design documents that translate requirements into code.
• Deal with challenges associated with scale.
• Assume responsibilities from technical design through technical client support.
• Manage expectations with internal stakeholders and context-switch in a fast-paced environment.
• Thrive in an environment that uses Elasticsearch extensively.
• Keep abreast of technology and contribute to the engineering strategy.
• Champion best development practices and provide mentorship.
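As a rough illustration of the Terraform side of the IaC responsibilities above (a minimal sketch, not any team's actual configuration; the region and bucket name are invented):

```
# main.tf -- minimal Terraform sketch: one versioned S3 bucket
terraform {
  required_providers {
    aws = { source = "hashicorp/aws" }
  }
}

provider "aws" {
  region = "us-east-1"                  # hypothetical region
}

resource "aws_s3_bucket" "artifacts" {
  bucket = "example-team-artifacts"     # hypothetical bucket name
}

resource "aws_s3_bucket_versioning" "artifacts" {
  bucket = aws_s3_bucket.artifacts.id
  versioning_configuration {
    status = "Enabled"
  }
}
```

`terraform init` and `terraform plan` would preview such a change before `terraform apply`; configuration of the provisioned hosts would then typically be handled by an Ansible playbook.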
What we’re looking for
• An AWS Certified Engineer with strong skills in
o Terraform
o Ansible
o *nix and shell scripting
• Preferably with experience in:
o Elasticsearch
o Circle CI
o CloudFormation
o Python
o Packer
o Docker
o Prometheus and Grafana
o Challenges of scale
o Production support
• Sharp analytical and problem-solving skills.
• Strong sense of ownership.
• Demonstrable desire to learn and grow.
• Excellent written and oral communication skills.
• Mature collaboration and mentoring abilities.
Strong knowledge of CICS, CICSPlex and SMP/E
Good knowledge of and experience with mainframe MQ administration
Experience working with JCL, IPCS, and diagnostic tools such as Omegamon or other equivalent tools
Skilled at troubleshooting problems in a CICS environment
Programming skills: COBOL, PL/1, C, REXX, Assembler
Proficient in working with high-availability environments, along with alignment to process (ITIL)
Experience with large, complex environments running critical and 24x7 applications
Experience with aspects of MQ administration and operations
Proficiency with Assembler programming