

Job Title: Sr. IT Analyst – IAM & Office 365
Location: Sector 32, Gurgaon
Work Mode: Hybrid
Shift Timing: 10:30 AM – 7:30 PM
Working Days: Monday to Friday (Should be comfortable working on weekends if needed)
About the Role
We are looking for a seasoned IT Security professional with a strong focus on Identity and Access Management (IAM) and Office 365 administration. This role is critical to ensuring secure access to enterprise systems, managing user identity lifecycles, and supporting ongoing cloud security initiatives.
You will work with cross-functional teams to implement security best practices, support operational tasks, and contribute to projects around IAM automation, Office 365 feature rollouts, and policy implementations.
Key Responsibilities
Identity and Access Management (IAM):
- Administer and maintain IAM systems across cloud and on-prem environments.
- Manage user provisioning, de-provisioning, and access reviews.
- Implement and optimize Conditional Access Policies, MFA, and RBAC.
- Integrate IAM solutions with Azure AD, Office 365, and other enterprise tools.
- Automate workflows for identity lifecycle management and secret/certificate rotation (see the sketch after this list).
- Maintain documentation such as SOPs and process flows.
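As an illustration of the lifecycle-automation work described above, the minimal sketch below disables a departed user and revokes their sessions through the Microsoft Graph API. It is an illustrative example only, not a prescribed implementation: the access token is assumed to have been acquired separately (for example via MSAL client credentials), and the user ID is a placeholder.

```python
"""Illustrative sketch: deprovisioning a leaver via the Microsoft Graph API.

Assumes an access token has already been obtained elsewhere (e.g. via MSAL
client-credentials flow); token acquisition is deliberately omitted.
"""
import requests

GRAPH = "https://graph.microsoft.com/v1.0"


def disable_user(user_id: str, token: str) -> None:
    """Block sign-in for a departed user by setting accountEnabled to false."""
    resp = requests.patch(
        f"{GRAPH}/users/{user_id}",
        headers={"Authorization": f"Bearer {token}"},
        json={"accountEnabled": False},
        timeout=30,
    )
    resp.raise_for_status()


def revoke_sessions(user_id: str, token: str) -> None:
    """Invalidate refresh tokens so any active sessions are cut off."""
    resp = requests.post(
        f"{GRAPH}/users/{user_id}/revokeSignInSessions",
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
```

In practice, calls like these would sit inside a larger joiner/mover/leaver workflow triggered from the HR system rather than being run ad hoc.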
Office 365 Administration:
- Manage services like SharePoint Online, Teams, and PowerApps.
- Configure and monitor security features like DLP and Defender for Office 365.
- Troubleshoot Office 365 issues and respond to service requests.
- Ensure Office 365 logs are properly integrated into the SIEM system.
Operational Security & Projects:
- Conduct user access audits, incident handling, and vulnerability remediation.
- Participate in security tool upgrades and rollout of new O365 features.
- Ensure compliance with internal policies and external standards.
- Drive improvements in access control and security posture.
Requirements
- Education: Bachelor’s in Computer Science, IT, Cybersecurity, or a related field.
- Experience:
- 4+ years in IT Security with expertise in IAM and Office 365.
- Hands-on experience with Azure AD, MFA, Conditional Access, and O365 security.
- Technical Skills:
- Strong knowledge of IAM principles, RBAC, authentication protocols.
- PowerShell scripting for automation.
- Exposure to Microsoft Graph API is a plus.
- Soft Skills:
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration.
Good to Have
- Microsoft Certification – MS-102 or equivalent.

Strong proficiency in C#, .NET Core/.NET 6/8, ASP.NET MVC/Web API.
Hands-on experience with IDP integrations (Azure AD B2C, Okta, Auth0, or similar), including configuring applications, managing user flows, and troubleshooting SSO issues.
Good knowledge of OAuth 2.0, OpenID Connect, and SAML protocols (see the token-validation sketch after these requirements).
Experience with RESTful API development and SQL Server/Entity Framework.
Familiarity with frontend frameworks (Angular/React) is an advantage.
Experience with Azure/AWS cloud services and DevOps practices is preferred.
Excellent problem-solving and debugging skills.
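To make the OAuth 2.0/OpenID Connect requirement above concrete, here is a minimal, hedged sketch of validating an IdP-issued token in Python with PyJWT. The issuer URL and audience are hypothetical placeholders; a real integration would read them from the application's IdP configuration (Azure AD B2C, Okta, Auth0, or similar).

```python
"""Illustrative sketch: validating an OIDC-issued JWT against the issuer's JWKS.

Requires pyjwt[crypto] and requests. ISSUER and AUDIENCE are placeholders.
"""
import jwt  # PyJWT
import requests

ISSUER = "https://example-tenant.b2clogin.com/example/v2.0"  # hypothetical issuer
AUDIENCE = "api://my-client-id"                              # hypothetical audience


def validate_token(token: str) -> dict:
    # Discover the JWKS endpoint from the issuer's OIDC metadata document.
    metadata = requests.get(
        f"{ISSUER}/.well-known/openid-configuration", timeout=10
    ).json()
    jwks_client = jwt.PyJWKClient(metadata["jwks_uri"])
    signing_key = jwks_client.get_signing_key_from_jwt(token)
    # Verify signature, expiry, issuer, and audience in one call.
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience=AUDIENCE,
        issuer=ISSUER,
    )
```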
- Experience in API development using Java will be a plus.
- Excellent knowledge and experience in writing testable, scalable, flexible, robust, and efficient web applications using Java EE 6/7 technologies, specifically Spring Core, Spring Boot, Spring Data, Spring Batch, and JPA.
- Experience successfully deploying Java-based applications in production and understanding load balancing, authentication, and fault tolerance with Tomcat.
- Experience in database modeling (MySQL/NoSQL databases such as MongoDB).
- Knowledge of integrating with Ant, Maven, Git, and shell scripting.
- Strong backend experience developing the data layer using at least one ORM framework such as Hibernate or JPA.
- Strong RDBMS and SQL skills; experience with MySQL, Teradata, and data-warehousing databases.
- Experience in analytics frameworks and visualization products.
- Excellent knowledge and experience of Maven, Continuous Integration, and Continuous Delivery with Jenkins.
- Experience with JavaScript frameworks, especially Angular, is a definite plus.
- Java 8, J2EE, Spring Boot, Microservices, Apache Spark, DevOps, and advanced SQL, preferably with expertise in data engineering/data analytics.
- ELK (Elasticsearch, Logstash, Kibana) stack, Teradata, and any NoSQL database (see the logging sketch after this list).
- Hands-on experience maintaining products on cloud technologies such as PCF, Azure, Docker, Kubernetes, etc.
- NodeJS, Angular 2+, and GitLab with CI/CD.
- Hands-on experience with Unix servers, shell scripting, large-scale data processing, and performance tuning.
- Experience working with test automation frameworks such as Selenium, TestNG, Python, Cucumber, Karma, and Karate/Jasmine.
- Experience using Eclipse, Spring Tool Suite, project build tools (Maven, Gradle, etc.), and JIRA for ALM.
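As a small illustration of the ELK-stack experience listed above, the sketch below indexes a structured log event into Elasticsearch and queries it back with the official Python client. The cluster URL and index name are placeholders, and elasticsearch-py 8.x is assumed.

```python
"""Illustrative sketch: pushing a log event into Elasticsearch and querying it back."""
from datetime import datetime, timezone

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # hypothetical cluster endpoint

# Index a single structured log event.
es.index(
    index="app-logs",
    document={
        "@timestamp": datetime.now(timezone.utc).isoformat(),
        "level": "ERROR",
        "service": "payments",
        "message": "timeout calling downstream API",
    },
)

# Query recent errors for the same service.
hits = es.search(
    index="app-logs",
    query={
        "bool": {
            "must": [
                {"match": {"level": "ERROR"}},
                {"match": {"service": "payments"}},
            ]
        }
    },
)
for hit in hits["hits"]["hits"]:
    print(hit["_source"]["message"])
```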
- At least 5 years of experience with cloud technologies (AWS and Azure) and development.
- Experience implementing DevOps practices and tools in areas such as CI/CD with Jenkins, environment automation, release automation, virtualization, infrastructure as code, and metrics tracking.
- Hands-on experience configuring DevOps tools across different environments.
- Strong knowledge of DevOps design patterns, processes, and best practices.
- Hands-on experience setting up build pipelines.
- Prior working experience in system administration or architecture in Windows or Linux.
- Must have experience in Git (Bitbucket, GitHub, GitLab).
- Hands-on experience on Jenkins pipeline scripting.
- Hands-on knowledge in one scripting language (Nant, Perl, Python, Shell or PowerShell)
- Configuration level skills in tools like SonarQube (or similar tools) and Artifactory.
- Expertise in virtual infrastructure (VMware, VirtualBox, QEMU, KVM, or Vagrant) and environment automation/provisioning using SaltStack, Ansible, Puppet, or Chef.
- Experience deploying, automating, maintaining, and managing Azure cloud-based production systems, including capacity monitoring (see the VM inventory sketch after this list).
- Good to have: experience migrating code repositories from one source control system to another.
- Hands-on experience with Docker containers and orchestration-based deployments such as Kubernetes, Service Fabric, and Docker Swarm.
- Must have good communication and problem-solving skills.
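To make the Azure automation and capacity-monitoring items above concrete, here is a minimal sketch that inventories the VMs in a subscription using the Azure SDK for Python. The subscription ID is a placeholder, and credentials are assumed to be resolved from the environment via DefaultAzureCredential.

```python
"""Illustrative sketch: a small Azure VM inventory using azure-identity and azure-mgmt-compute."""
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # hypothetical subscription

credential = DefaultAzureCredential()
compute = ComputeManagementClient(credential, SUBSCRIPTION_ID)

# List every VM in the subscription with its location and size --
# the kind of data a capacity/monitoring report would start from.
for vm in compute.virtual_machines.list_all():
    print(vm.name, vm.location, vm.hardware_profile.vm_size)
```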
Key Responsibilities:
- Develop and Maintain CI/CD Pipelines: Design, implement, and manage CI/CD pipelines using GitOps practices.
- Kubernetes Management: Deploy, manage, and troubleshoot Kubernetes clusters to ensure high availability and scalability of applications (see the cluster health-check sketch after this list).
- Cloud Infrastructure: Design, deploy, and manage cloud infrastructure on AWS, utilizing services such as EC2, S3, RDS, Lambda, and others.
- Infrastructure as Code: Implement and manage infrastructure using IaC tools like Terraform, CloudFormation, or similar.
- Monitoring and Logging: Set up and manage monitoring, logging, and alerting systems to ensure the health and performance of the infrastructure.
- Automation: Identify and automate repetitive tasks to improve efficiency and reliability.
- Security: Implement security best practices and ensure compliance with industry standards.
- Collaboration: Work closely with development, QA, and operations teams to ensure seamless integration and delivery of products.
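As a concrete illustration of the Kubernetes management responsibility above, the following sketch runs a quick cluster health check with the official Kubernetes Python client. It assumes a local kubeconfig is available; inside a cluster you would load in-cluster configuration instead.

```python
"""Illustrative sketch: a quick cluster health check with the Kubernetes Python client."""
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in a pod
v1 = client.CoreV1Api()

# Flag pods that are not Running/Succeeded, across all namespaces.
for pod in v1.list_pod_for_all_namespaces().items:
    phase = pod.status.phase
    if phase not in ("Running", "Succeeded"):
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {phase}")

# Report node readiness from each node's Ready condition.
for node in v1.list_node().items:
    ready = next(
        (c.status for c in node.status.conditions if c.type == "Ready"), "Unknown"
    )
    print(f"node {node.metadata.name} Ready={ready}")
```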
Required Skills and Qualifications:
- Experience: 2-5 years of experience in a DevOps role.
- AWS: In-depth knowledge of AWS services and solutions.
- CI/CD Tools: Experience with CI/CD tools such as Jenkins, GitLab CI, CircleCI, or similar.
- GitOps Expertise: Proficient in GitOps methodologies and tools.
- Kubernetes: Strong hands-on experience with Kubernetes and container orchestration.
- Scripting and Automation: Proficient in scripting languages such as Bash, Python, or similar.
- Infrastructure as Code (IaC): Hands-on experience with IaC tools like Terraform, CloudFormation, or similar.
- Monitoring Tools: Familiarity with monitoring and logging tools like Prometheus, Grafana, ELK stack, or similar.
- Version Control: Strong understanding of version control systems, primarily Git.
- Problem-Solving: Excellent problem-solving and debugging skills.
- Collaboration: Ability to work in a fast-paced, collaborative environment.
- Education: Bachelor’s or master’s degree in computer science or a related field.
Company Name: Petpooja!
Location: Ahmedabad
Designation: DevOps Engineer
Experience: 2 to 7 years
Candidates from Ahmedabad will be preferred
Job Responsibilities:
- Plan, implement, and maintain the software development infrastructure.
- Introduce and oversee software development automation across cloud providers like AWS and Azure
- Help develop, manage, and monitor continuous integration and delivery systems
- Collaborate with software developers, QA specialists, and other team members to ensure the timely and successful delivery of new software releases
- Contribute to software design and development, including code review and feedback
- Assist with troubleshooting and problem-solving when issues arise
- Keep up with the latest industry trends and best practices while ensuring the company meets configuration requirements
- Participate in team improvement initiatives
- Help create and maintain internal documentation using Git or other similar applications
- Provide on-call support as needed
Qualification Required:
1. You should have experience handling various services on the AWS cloud.
2. Previous experience as a Site Reliability Engineer would be an advantage.
3. You should be well versed in common commands and hands-on with Linux/Ubuntu administration and other aspects of supporting a software development team.
4. At least 2 to 7 years of experience managing AWS services such as Auto Scaling, Route 53, and other internal networking.
5. An AWS certification is recommended.
Role & Responsibilities
- Application Architecture: Design and implement the application environment
- Manage the configuration and operation of client-based (on-premise) computer operating systems
- Monitor the system daily and respond immediately to security or usability concerns
- Create and monitor the disaster recovery (DR) of all servers.
- Respond and assign a team to resolve help desk requests
- Monitor and maintain server functionality and security issues.
- Administer infrastructure, including firewalls, databases, malware protection software, and other processes
- Automate configuration management using Ansible, Puppet, Chef, or an equivalent tool
- Manage and administer servers, networks, and applications such as DNS, FTP, and Web servers.
- Troubleshoot in-house network issues and fix them.
- Provide solutions to complex problems on the integration of various technologies
- Design plans as well as lead initiatives for the optimization and restructuring of network architecture
- Monitor the environmental conditions of a data center and cloud servers to ensure they are optimum for servers, routers, and other devices
- Collaborate with IT handlers, sales, and data center managers to develop an action plan for improved operations
- Conduct inspections on power and cooling systems to ensure they are operational and efficient
- Resolve operational, infrastructure or hardware incidents in a data center and cloud servers.
- Monitor and maintain company assets
- Manage the infrastructure team, including skills-enhancement (training) plans and their execution
Skills
- In-depth knowledge of the Linux Operating System
- Expertise in Shell and/or Python scripting
- In-depth knowledge of any of the CI/CD tools like Jenkins/GitLab etc.
- Basic knowledge of monitoring tools like Zabbix, Nagios, etc. (see the health-check sketch after this list)
- Expertise in at least one cloud provider such as AWS, Google Cloud, or Microsoft Azure
- Strong experience with SQL and MySQL
- A working understanding of code and scripting (PHP, Python, Angular, and NodeJS)
- Ability to use a wide variety of open-source technologies
- Knowledge of best practices and IT operations
- Basic experience with VMware
- Advanced knowledge of system vulnerabilities and security issues
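To illustrate the monitoring and scripting skills above, here is a minimal HTTP health-check script of the kind that feeds a Zabbix/Nagios-style alerting flow. The endpoints are hypothetical, and the exit code follows the Nagios CRITICAL convention so the script could be wired into an existing check.

```python
"""Illustrative sketch: a minimal HTTP health check suitable for a monitoring pipeline."""
import sys

import requests

ENDPOINTS = [
    "https://example.internal/healthz",      # hypothetical service endpoints
    "https://api.example.internal/status",
]

failures = []
for url in ENDPOINTS:
    try:
        resp = requests.get(url, timeout=5)
        if resp.status_code != 200:
            failures.append(f"{url} -> HTTP {resp.status_code}")
    except requests.RequestException as exc:
        failures.append(f"{url} -> {exc}")

if failures:
    # Exit code 2 mirrors the Nagios CRITICAL convention.
    print("CRITICAL: " + "; ".join(failures))
    sys.exit(2)
print("OK: all endpoints healthy")
```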

What is the role?
You will be responsible for developing and designing front-end web architecture, ensuring the responsiveness of applications, and working alongside graphic designers on web design features, among other duties. You will also own the functional/technical track of the project.
Key Responsibilities
- Develop and automate large-scale, high-performance data processing systems, batch and/or streaming (see the PySpark sketch after this list).
- Apply high-quality software engineering practices to building data infrastructure and pipelines at scale.
- Lead data engineering projects to ensure pipelines are reliable, efficient, testable, and maintainable.
- Optimize performance to meet high throughput and scale.
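As an example of the batch-processing work described above, the sketch below shows a small PySpark aggregation job. The input and output paths are placeholders; a production pipeline would add schema enforcement, data-quality checks, and monitoring.

```python
"""Illustrative sketch: a daily batch aggregation job in PySpark."""
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-event-rollup").getOrCreate()

# Read a day of raw events, aggregate per user and event type,
# and write a partitioned result. Paths are hypothetical.
events = spark.read.parquet("s3a://example-bucket/events/dt=2024-01-01/")
rollup = (
    events
    .filter(F.col("event_type").isNotNull())
    .groupBy("user_id", "event_type")
    .agg(F.count("*").alias("event_count"))
)
rollup.write.mode("overwrite").partitionBy("event_type").parquet(
    "s3a://example-bucket/rollups/dt=2024-01-01/"
)
spark.stop()
```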
What are we looking for?
- 4+ years of relevant industry experience.
- Working with data at the terabyte scale.
- Experience designing, building and operating robust distributed systems.
- Experience designing and deploying high throughput and low latency systems with reliable monitoring and logging practices.
- Building and leading teams.
- Working knowledge of relational databases like PostgreSQL/MySQL.
- Experience with Python/Spark/Kafka/Celery.
- Experience working with OLTP and OLAP systems
- Excellent communication skills, both written and verbal.
- Experience working in the cloud, e.g., AWS, Azure, or GCP
Whom will you work with?
You will work with a top-notch tech team, working closely with the architect and engineering head.
What can you look for?
A wholesome opportunity in a fast-paced environment that will let you juggle multiple concepts while maintaining the quality of your work, share your ideas, and learn a great deal on the job. You will work with a team of highly talented young professionals and enjoy the benefits of being at this company.
We are
We strive to make selling fun with our SaaS incentive gamification product. Company is the #1 gamification software that automates and digitizes Sales Contests and Commission Programs. With game-like elements, rewards, recognitions, and complete access to relevant information, Company turbocharges an entire salesforce. Company also empowers Sales Managers with easy-to-publish game templates, leaderboards, and analytics to help accelerate performances and sustain growth.
We are a fun and high-energy team, with people from diverse backgrounds - united under the passion of getting things done. Rest assured that you shall get complete autonomy in your tasks and ample opportunities to develop your strengths.
Way forward
If you find this role exciting and want to join us in Bangalore, India, then apply by clicking below. Provide your details and upload your resume. All received resumes will be screened; shortlisted candidates will be invited for a discussion, and on mutual alignment and agreement we will proceed with hiring.
Minimum 2 years of work experience with Snowflake and Azure storage (see the sketch after these requirements).
Minimum 3 years of development experience with ETL tools.
Strong SQL skills in other databases such as Oracle, SQL Server, DB2, and Teradata.
Good to have Hadoop and Spark experience.
Good conceptual knowledge of data warehousing and its various methodologies.
Working knowledge of scripting such as UNIX/shell.
Good presentation and communication skills.
Should be flexible with overlapping working hours.
Should be able to work independently and be proactive.
Good understanding of Agile development cycle.
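To illustrate the Snowflake and ETL experience listed above, here is a minimal sketch that loads a staged file and checks the row count with the Snowflake Python connector. The account, credentials, warehouse, and object names are all placeholders.

```python
"""Illustrative sketch: loading a staged file with the Snowflake Python connector."""
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",   # hypothetical account identifier
    user="ETL_USER",
    password="***",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="STAGING",
)
try:
    cur = conn.cursor()
    # Copy a staged CSV file into the target table, then sanity-check the row count.
    cur.execute(
        "COPY INTO STAGING.ORDERS FROM @ORDERS_STAGE "
        "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
    )
    cur.execute("SELECT COUNT(*) FROM STAGING.ORDERS")
    print("rows loaded:", cur.fetchone()[0])
finally:
    conn.close()
```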


