11+ SOD Jobs in Hyderabad | SOD Job openings in Hyderabad
Position Overview:
We are seeking a highly skilled and motivated Identity and Access Management (IAM) Developer/IAM Engineer to join our dynamic team. The successful candidate will be responsible for designing, implementing, and maintaining robust IAM solutions that ensure secure access to our systems and applications. This role involves collaborating with various teams to understand their requirements and deliver scalable and efficient IAM systems.
Key Responsibilities:
- Understand and implement customizations based on the specifications in business requirement documents or sprint tasks.
- Review the solution with the business analyst before presenting it to external stakeholders or deploying it to any production environment.
- Document all customizations, fixes, and tests correctly, including all related scenarios and evidence, and store them in the corresponding folders/tools.
- All deployments must adhere to application factory guidelines and be validated by team members.
- Perform production system deployments following the ‘4 eyes principle’, ensuring dual verification. Accurately and methodically complete all required information in predefined templates for cutover plans, system testing, UAT, change requests, and application team communications as instructed.
- Comprehend existing template content, make necessary updates, ensure team alignment, and address issues before submitting to the application team for review.
- Maintain frequent communication with team members to report progress, ensure alignment, and promptly address any issues or blockers encountered.
- Prepare and internally review test data for demo/ UAT sessions with the application team, ensure sessions are recorded and structured logically, and assess all feedback received for potential integration.
- Assess feasibility, develop configurations, prepare test data, conduct demo and UAT sessions, create production deployment documents, submit change requests, execute go-live, and facilitate handover to Operations.
Required Qualifications:
- Have a minimum of 2 years of hands-on experience with the identity product (Saviynt EIC 23.x/24.x)
- Possess comprehensive understanding of Saviynt IGA architecture, with practical experience in application onboarding, workflow implementation, Segregation of Duties (SOD), certifications, and custom jar development.
- Knowledge of and experience working in agile (Scrum) environments, with the ability to follow the existing protocols and ceremonies that are part of the day-to-day work.
- Working knowledge of JSON.
- Ability to build SQL queries when required (MySQL 8.0 as backend).
- Knowledge of APIs (SOAP, REST)
- Capable of using tools such as Postman or SOAP UI to consume APIs (a minimal programmatic sketch follows this list)
- Basic knowledge of directory services and applications such as Active Directory, Azure AD, and Exchange (online/on-prem)
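As a rough illustration of the API-consumption items above, here is a minimal sketch of making a REST call in code, the kind of request one would otherwise issue from Postman or SOAP UI. The endpoint, token, and payload are hypothetical placeholders, not a specific Saviynt API.

```typescript
// Minimal sketch of consuming a REST API programmatically.
// The endpoint, token, and payload are hypothetical placeholders.
async function createAccessRequest(baseUrl: string, token: string): Promise<void> {
  const response = await fetch(`${baseUrl}/api/v1/requests`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${token}`,
    },
    body: JSON.stringify({ user: "jdoe", entitlement: "SAP_FI_DISPLAY" }),
  });
  if (!response.ok) {
    throw new Error(`Request failed: ${response.status}`);
  }
  console.log(await response.json());
}
```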
Google Data Engineer - SSE
Position Description
Google Cloud Data Engineer
Notice Period: Immediate to 30 days.
Job Description:
We are seeking a highly skilled Data Engineer with extensive experience in Google Cloud Platform (GCP) data services and big data technologies. The ideal candidate will be responsible for designing, implementing, and optimizing scalable data solutions while ensuring high performance, reliability, and security.
Key Responsibilities:
• Design, develop, and maintain scalable data pipelines and architectures using GCP data services.
• Implement and optimize solutions using BigQuery, Dataproc, Composer, Pub/Sub, Dataflow, GCS, and BigTable.
• Work with GCP databases such as Bigtable, Spanner, CloudSQL, AlloyDB, ensuring performance, security, and availability.
• Develop and manage data processing workflows using Apache Spark, Hadoop, Hive, Kafka, and other Big Data technologies.
• Ensure data governance and security using Dataplex, Data Catalog, and other GCP governance tooling.
• Collaborate with DevOps teams to build CI/CD pipelines for data workloads using Cloud Build, Artifact Registry, and Terraform.
• Optimize query performance and data storage across structured and unstructured datasets.
• Design and implement streaming data solutions using Pub/Sub, Kafka, or equivalent technologies (see the sketch below).
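As a small illustration of the streaming item above, here is a minimal sketch of publishing an event to a Pub/Sub topic from Node.js/TypeScript (for example, one consumed downstream by a Dataflow pipeline). It assumes a recent @google-cloud/pubsub client and ambient GCP credentials; the topic name and payload are placeholders.

```typescript
// Minimal sketch: publish an event to a Pub/Sub topic for a streaming pipeline.
// Assumes a recent @google-cloud/pubsub client; topic name and payload are placeholders.
import { PubSub } from "@google-cloud/pubsub";

async function publishClickEvent(): Promise<void> {
  const pubsub = new PubSub();                      // uses ambient GCP credentials
  const topic = pubsub.topic("clickstream-events"); // hypothetical topic
  const event = { userId: "u-123", action: "page_view", ts: Date.now() };
  const messageId = await topic.publishMessage({
    data: Buffer.from(JSON.stringify(event)),
  });
  console.log(`Published message ${messageId}`);
}

publishClickEvent().catch(console.error);
```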
Required Skills & Qualifications:
• 8-15 years of experience
• Strong expertise in GCP Dataflow, Pub/Sub, Cloud Composer, Cloud Workflow, BigQuery, Cloud Run, Cloud Build.
• Proficiency in Python and Java, with hands-on experience in data processing and ETL pipelines.
• In-depth knowledge of relational databases (SQL, MySQL, PostgreSQL, Oracle) and NoSQL databases (MongoDB, Scylla, Cassandra, DynamoDB).
• Experience with Big Data platforms such as Cloudera, Hortonworks, MapR, Azure HDInsight, IBM Open Platform.
• Strong understanding of AWS Data services such as Redshift, RDS, Athena, SQS/Kinesis.
• Familiarity with data formats such as Avro, ORC, Parquet.
• Experience handling large-scale data migrations and implementing data lake architectures.
• Expertise in data modeling, data warehousing, and distributed data processing frameworks.
• Google Cloud Professional Data Engineer certification or equivalent.
Good to Have:
• Experience in BigQuery, Presto, or equivalent (a BigQuery query sketch follows this list).
• Exposure to Hadoop, Spark, Oozie, HBase.
• Understanding of cloud database migration strategies.
• Knowledge of GCP data governance and security best practices.
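A minimal sketch of running a BigQuery query from Node.js/TypeScript, assuming the @google-cloud/bigquery client; the project, dataset, and table names are placeholders.

```typescript
// Minimal sketch: run a BigQuery query and print the rows.
// Assumes the @google-cloud/bigquery client; table names are placeholders.
import { BigQuery } from "@google-cloud/bigquery";

async function dailyOrderCounts(): Promise<void> {
  const bigquery = new BigQuery();
  const query = `
    SELECT DATE(created_at) AS day, COUNT(*) AS orders
    FROM \`my-project.sales.orders\`  -- hypothetical table
    GROUP BY day
    ORDER BY day DESC
    LIMIT 7`;
  const [rows] = await bigquery.query({ query });
  rows.forEach((row) => console.log(row));
}

dailyOrderCounts().catch(console.error);
```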
Required Skills:
- Experience in a systems administration, SRE, or DevOps-focused role
- Experience in handling production support (on-call)
- Good understanding of the Linux operating system and networking concepts.
- Demonstrated competency with the following AWS services: ECS, EC2, EBS, EKS, S3, RDS, ELB, IAM, and Lambda (see the sketch after this list).
- Experience with Docker containers and containerization concepts
- Experience with managing and scaling Kubernetes clusters in a production environment
- Experience building scalable infrastructure in AWS with Terraform.
- Strong knowledge of protocols such as HTTP/HTTPS, SMTP, DNS, and LDAP
- Experience monitoring production systems
- Expertise in applying automation/DevOps principles, experience with operational tools, and the ability to apply best practices for infrastructure and software deployment (e.g., Ansible).
- Experience configuring and operating HAProxy, Nginx, SSH, and MySQL
- Ability to work seamlessly with software developers, QA, project managers, and business development
- Ability to produce and maintain written documentation
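As a small illustration of working with the AWS services listed above, here is a sketch using the AWS SDK for JavaScript v3 to list running EC2 instances. The region and filter values are placeholders, and credentials are assumed to come from the environment; this complements, rather than replaces, Terraform-based provisioning.

```typescript
// Minimal sketch: list running EC2 instances with the AWS SDK for JavaScript v3.
// Region and filters are placeholders; credentials come from the environment.
import { EC2Client, DescribeInstancesCommand } from "@aws-sdk/client-ec2";

async function listRunningInstances(): Promise<void> {
  const client = new EC2Client({ region: "us-east-1" });
  const result = await client.send(
    new DescribeInstancesCommand({
      Filters: [{ Name: "instance-state-name", Values: ["running"] }],
    })
  );
  for (const reservation of result.Reservations ?? []) {
    for (const instance of reservation.Instances ?? []) {
      console.log(instance.InstanceId, instance.InstanceType);
    }
  }
}

listRunningInstances().catch(console.error);
```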
Position: Full-Stack Developer
Experience level: 5+ years
Location: Bangalore / Hyderabad
Tech stack: Node, Loopback, React
Essential Duties:
- Design and develop technical solutions that meet user needs with respect to functionality, performance, scalability, and reliability.
- Drive department best practices, guideline implementation, and adherence to standards.
- Build and maintain medium-sized software platforms in the cloud.
- Build elegant, maintainable, well-documented, secure code.
Good to have:
- Development using a test-driven approach, AWS
Qualifications:
- 5+ years of progressive development experience as a Software Engineer.
- Bachelor's degree in Computer Science/Engineering or equivalent work experience.
- Strong hands-on development experience in Node.js and React.
- Hands-on experience with RESTful web services/APIs and web-based applications is preferred (a minimal route sketch follows this list).
- Experience implementing solutions using Agile delivery methodologies.
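A minimal sketch of a RESTful endpoint in the Node stack named above. It uses plain Express/TypeScript for brevity rather than LoopBack, and the routes, payload, and port are placeholders.

```typescript
// Minimal sketch: a small RESTful API with Express (placeholder routes and port).
import express, { Request, Response } from "express";

const app = express();
app.use(express.json());

// Simple health-check endpoint.
app.get("/api/health", (_req: Request, res: Response) => {
  res.json({ status: "ok" });
});

// Create a user; persistence is omitted in this sketch.
app.post("/api/users", (req: Request, res: Response) => {
  const { name, email } = req.body ?? {};
  if (!name || !email) {
    return res.status(400).json({ error: "name and email are required" });
  }
  res.status(201).json({ id: Date.now().toString(), name, email });
});

app.listen(3000, () => console.log("API listening on :3000"));
```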
Job ID: DP0601
Job Description:
Job Location: Bangalore, Hyderabad (Type: WFH/Hybrid)
Experience Level: 4-8 years
Angular JD keywords:
- Experience in Angular 2+ (Latest Version Preferred)
- Strong understanding of OOP concepts
- Strong understanding of TypeScript
- Knowledge of Observables (see the RxJS sketch after this list)
- Advanced knowledge of JavaScript
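A minimal RxJS sketch of the Observable pattern referenced above (the same pattern Angular's HttpClient and reactive forms expose); the stream and operators here are illustrative only, not tied to a specific component.

```typescript
// Minimal RxJS sketch of the Observable pattern; values and operators are illustrative.
import { of } from "rxjs";
import { filter, map } from "rxjs/operators";

const prices$ = of(120, 75, 310, 42); // stand-in for an async data stream

prices$
  .pipe(
    filter((price) => price > 100), // keep only large values
    map((price) => price * 0.9)     // apply a 10% discount
  )
  .subscribe({
    next: (value) => console.log("discounted:", value),
    complete: () => console.log("stream complete"),
  });
```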
• Hadoop Ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.)
• Should have good hands-on experience with Spark (Spark with Java/PySpark)
• Hive
• Must be good with SQL (Spark SQL/HiveQL)
• Application design, software development and automated testing
Environment Experience:
• Experience with implementing integrated automated release management using tools/technologies/frameworks like Maven, Git, code/security review tools, Jenkins, Automated testing, and Junit.
• Demonstrated experience with Agile or other rapid application development methods
• Cloud development (AWS/Azure/GCP)
• Unix / Shell scripting
• Web services, open API development, and REST concepts
- Proficient with Objective-C, Swift, Cocoa Touch, and UIKit.
- Experience with iOS frameworks such as Core Data, Core Animation, etc.
- Knowledge of Apple’s design principles, interface guidelines, and UI/UX standards
- Experience with performance and memory tuning with tools such as Instruments.
- Familiarity with cloud message APIs and push notifications
- Proficient understanding of code versioning tools such as Git and SVN
- Experience in Payment Integration, Push Notification & Third Party Integration.
- Experienced with the Apple approval and distribution process, including Ad Hoc and Enterprise distribution.
- Worked with various architectures and patterns such as MVC, MVVM, Singleton, Delegate, and Notification.
- Good to have knowledge/experience in developing GUIs for C5 VoIP applications.
- Good to have knowledge of WebRTC and various VoIP standards.
About the Role
Dremio’s SREs ensure that our internally and externally visible services have reliability and uptime appropriate to users' needs, along with a fast rate of improvement. You will be joining a newly formed team that will spearhead our efforts to launch a cloud service. This is an opportunity to join a very fast-growing startup and help build a cloud service from the ground up.
Responsibilities and Ownership
- Ability to debug and optimize code and automate routine tasks.
- Evangelize and advocate for reliability practices across our organization.
- Collaborate with other Engineering teams to support services before they go live through activities such as system design consulting, developing software platforms and frameworks, monitoring/alerting, capacity planning and launch reviews.
- Analyze and optimize our core product by developing and implementing reliability and performance practices.
- Scale systems sustainably through automation and evolve systems by pushing for changes that improve reliability and velocity.
- Be on-call for services that the SRE team owns.
- Practice sustainable incident response and blameless postmortems.
Qualifications
- 6+ years of relevant experience in the following areas: SRE, DevOps, Cloud Operations, Systems Engineering, or Software Engineering.
- Excellent command of cloud services on AWS/GCP/Azure, Kubernetes and CI/CD pipelines.
- Have moderate to advanced experience in Java, C, C++, Python, Go, or other object-oriented programming languages.
- You are interested in designing, analyzing, and troubleshooting large-scale distributed systems.
- You have a systematic problem-solving approach, coupled with strong communication skills and a sense of ownership and drive.
- You have a great ability to debug and optimize code and automate routine tasks.
- You have a solid background in software development and architecting resilient and reliable applications.
In this role, the individual will be part of the engineering team and will be responsible for
* Participating and collaborating with the Product Owner and cross-functional teams in the organization to understand the business requirements and to deliver solutions that can scale.
* Designing and developing APIs in Node.js using the Express.js framework with relevant middleware integrations.
* Designing and implementing software that is simple, intuitive, and easy to use, following a test-first approach.
* Proactively anticipating problems and keeping the team and management informed in a timely manner.
**Basic Requirements:**
* 1-2 years of experience in designing and building secure large-scale systems.
* Deep experience in one or more relevant front-end frameworks such as React.
* Ability to rapidly prototype and adjust in response to customer feedback
* Strong problem solving and troubleshooting skills.
* Solid coding practices including peer code reviews, unit testing, and a preference for agile development.
* Expertise in Node.js and JavaScript.
* Strong in JavaScript testing frameworks such as Jasmine, Karma, Jest, Mocha, and Cucumber.
* Strong in REST and GraphQL API frameworks.
* Knowledge of securing REST APIs using OAuth, JWT, etc. (see the JWT middleware sketch after this list).
* Experience in designing and working with NoSQL databases such as MongoDB.
* Experience in designing and working with SQL Databases such as MySQL, Postgres, etc.
* Experience in building solutions on top of cloud platforms such as AWS or Google Cloud.
* Excellent written and verbal communication skills.
* Experience with building server-side applications with object-oriented design and multi-page MVC architecture.
* Actively practicing professional software engineering best practices for the full software development life-cycle, including coding standards, code reviews, source control management, build processes, testing, and operations.
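A minimal sketch of securing an Express route with a JWT, assuming the jsonwebtoken package; the secret, claims, and routes are placeholders, and a production setup would add proper OAuth flows and configuration management.

```typescript
// Minimal sketch: protect an Express endpoint with a JWT using jsonwebtoken.
// Secret, claims, and routes are placeholders for illustration only.
import express, { NextFunction, Request, Response } from "express";
import jwt from "jsonwebtoken";

const app = express();
const SECRET = "change-me"; // placeholder; load from configuration in practice

function requireJwt(req: Request, res: Response, next: NextFunction): void {
  const header = req.headers.authorization ?? "";
  const token = header.startsWith("Bearer ") ? header.slice(7) : "";
  try {
    (req as any).user = jwt.verify(token, SECRET); // throws if invalid or expired
    next();
  } catch {
    res.status(401).json({ error: "invalid or missing token" });
  }
}

app.post("/login", (_req: Request, res: Response) => {
  // Credential check omitted; issue a short-lived token for a demo user.
  res.json({ token: jwt.sign({ sub: "demo-user" }, SECRET, { expiresIn: "1h" }) });
});

app.get("/api/profile", requireJwt, (req: Request, res: Response) => {
  res.json({ user: (req as any).user });
});

app.listen(3000);
```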







