11+ FRD Jobs in Hyderabad | FRD Job openings in Hyderabad
Apply to 11+ FRD Jobs in Hyderabad on CutShort.io. Explore the latest FRD Job opportunities across top companies like Google, Amazon & Adobe.
Business analyst, Business analysis, Product owner, Requirement gathering, Agile, FRD, BRD
Qualified Business Analyst with expertise in Data Analytics & AI and Digital Banking. Core domain: Wholesale and Retail Banking.
We primarily look for the following criteria:
- Strong requirement gathering / BA skills
- Experience with the Agile framework
- Domain: BFS (Wholesale / Retail, etc.)
- Data Analytics & AI and Digital Banking experience
Immediate joiners preferred.
We are looking for a highly skilled Sr. Big Data Engineer with 3-5 years of experience in building large-scale data pipelines, real-time streaming solutions, and batch/stream processing systems. The ideal candidate should be proficient in Spark, Kafka, Python, and AWS Big Data services, with hands-on experience in implementing CDC (Change Data Capture) pipelines and integrating multiple data sources and sinks.
Responsibilities
- Design, develop, and optimize batch and streaming data pipelines using Apache Spark and Python (see the streaming sketch after this list).
- Build and maintain real-time data ingestion pipelines leveraging Kafka and AWS Kinesis.
- Implement CDC (Change Data Capture) pipelines using Kafka Connect, Debezium or similar frameworks.
- Integrate data from multiple sources and sinks (databases, APIs, message queues, file systems, cloud storage).
- Work with AWS Big Data ecosystem: Glue, EMR, Kinesis, Athena, S3, Lambda, Step Functions.
- Ensure pipeline scalability, reliability, and performance tuning of Spark jobs and EMR clusters.
- Develop data transformation and ETL workflows in AWS Glue and manage schema evolution.
- Collaborate with data scientists, analysts, and product teams to deliver reliable and high-quality data solutions.
- Implement monitoring, logging, and alerting for critical data pipelines.
- Follow best practices for data security, compliance, and cost optimization in cloud environments.
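To make the streaming responsibility concrete, here is a minimal PySpark Structured Streaming sketch that reads JSON events from a Kafka topic and lands them on S3 as Parquet. The topic, schema, broker, and bucket names are illustrative placeholders, not details from the posting, and it assumes the Spark Kafka connector is on the classpath.

```python
# Minimal sketch of a Kafka -> S3 streaming pipeline in PySpark.
# Topic, schema, and paths are hypothetical; assumes the spark-sql-kafka
# connector package is available to the Spark session.
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

# Schema of the incoming JSON events (hypothetical)
schema = StructType([
    StructField("order_id", StringType()),
    StructField("status", StringType()),
    StructField("updated_at", TimestampType()),
])

# Source: subscribe to a Kafka topic and parse the JSON payload
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "orders")
          .load()
          .select(from_json(col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Sink: write Parquet to S3 with checkpointing for exactly-once file output
query = (events.writeStream
         .format("parquet")
         .option("path", "s3a://my-data-lake/orders/")
         .option("checkpointLocation", "s3a://my-data-lake/_chk/orders/")
         .trigger(processingTime="1 minute")
         .start())
query.awaitTermination()
```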
Required Skills & Experience
- Programming: Strong proficiency in Python (PySpark, data frameworks, automation).
- Big Data Processing: Hands-on experience with Apache Spark (batch & streaming).
- Messaging & Streaming: Proficient in Kafka (brokers, topics, partitions, consumer groups) and AWS Kinesis.
- CDC Pipelines: Experience with Debezium / Kafka Connect / custom CDC frameworks (a connector-registration sketch follows this list).
- AWS Services: AWS Glue, EMR, S3, Athena, Lambda, IAM, CloudWatch.
- ETL/ELT Workflows: Strong knowledge of data ingestion, transformation, partitioning, schema management.
- Databases: Experience with relational databases (MySQL, Postgres, Oracle) and NoSQL (MongoDB, DynamoDB, Cassandra).
- Data Formats: JSON, Parquet, Avro, ORC, Delta/Iceberg/Hudi.
- Version Control & CI/CD: Git, GitHub/GitLab, Jenkins, or CodePipeline.
- Monitoring/Logging: CloudWatch, Prometheus, ELK/OpenSearch.
- Containers & Orchestration (nice-to-have): Docker, Kubernetes, Airflow/Step Functions for workflow orchestration.
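As a hedged illustration of the CDC item above, the snippet below registers a hypothetical Debezium MySQL source connector through the Kafka Connect REST API. The hostnames, credentials, and table names are placeholders, and the config keys assume Debezium 2.x naming.

```python
# Sketch: registering a Debezium MySQL CDC connector via the Kafka Connect
# REST API. Endpoints, credentials, and table names are all hypothetical.
import json
import requests

connector = {
    "name": "orders-cdc",
    "config": {
        "connector.class": "io.debezium.connector.mysql.MySqlConnector",
        "database.hostname": "mysql.internal",
        "database.port": "3306",
        "database.user": "cdc_user",
        "database.password": "change-me",
        "database.server.id": "184054",
        "topic.prefix": "appdb",               # namespace for emitted topics
        "table.include.list": "appdb.orders",  # tables to capture
        "schema.history.internal.kafka.bootstrap.servers": "broker:9092",
        "schema.history.internal.kafka.topic": "schema-changes.appdb",
    },
}

resp = requests.post(
    "http://connect.internal:8083/connectors",  # Kafka Connect REST endpoint
    headers={"Content-Type": "application/json"},
    data=json.dumps(connector),
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["name"], "registered")
```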
Preferred Qualifications
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, or related field.
- Experience with large-scale data lake / lakehouse architectures.
- Knowledge of data warehousing concepts and query optimization.
- Familiarity with data governance, lineage, and cataloging tools (Glue Data Catalog, Apache Atlas).
- Exposure to ML/AI data pipelines is a plus.
Tools & Technologies (must-have exposure)
- Big Data & Processing: Apache Spark, PySpark, AWS EMR, AWS Glue
- Streaming & Messaging: Apache Kafka, Kafka Connect, Debezium, AWS Kinesis
- Cloud & Storage: AWS (S3, Athena, Lambda, IAM, CloudWatch)
- Programming & Scripting: Python, SQL, Bash
- Orchestration: Airflow / Step Functions
- Version Control & CI/CD: Git, Jenkins/CodePipeline
- Data Formats: Parquet, Avro, ORC, JSON, Delta, Iceberg, Hudi
About Medvarsity
Medvarsity is a leader in healthcare education, specializing in upskilling professionals with innovative digital learning solutions. We are expanding our Inside Sales team to help drive growth and support our learners.
Key Responsibilities
- Handle inbound and outbound sales calls with prospective and existing customers.
- Engage in telesales activities to promote Medvarsity's healthcare courses and products.
- Achieve monthly and quarterly sales targets set by the management.
- Maintain strong follow-up with leads through calls, emails, and messages.
- Understand customer requirements, provide product information, and guide them through enrollment.
- Learn about new courses and healthcare-related educational products to communicate effectively with clients.
- Update daily sales reports and customer details in the CRM system.
- Collaborate with the marketing team for effective lead management.
Required Skills & Attributes
- Excellent spoken communication skills in both English and Hindi.
- Ability and willingness to engage in outbound (cold calling) and inbound sales calls.
- Quick learner and receptive to training in healthcare education products.
- Confident, persuasive, and target-oriented approach.
- Basic computer and CRM software skills.
- Resilience to handle objections and turn prospects into enrollments.
Qualifications
- Graduation in any discipline (preferred, not mandatory for exceptional communicators).
- Prior experience in inside sales, telesales, or customer service is a plus; freshers with strong communication skills are welcome.
- Experience in the EdTech industry or similar B2C sales is preferable.
Mandatory skills:
- Programming skills in Python, Robot Framework, Selenium, and shell scripting.
- Experience with L2/L3 protocols: VLAN/DHCP/LACP/IGMP/PPPoE.
- Familiarity with device configuration protocols: CLI/NETCONF/SNMP.
- Experience in telecom technologies such as DSLAM/GPON/G.fast/next-gen broadband technologies is highly desirable.
- Knowledge of regression/performance/load/scale/stability test areas.
- Hands-on experience with common industry equipment such as Spirent TestCenter, Ixia, Abacus, Shenick (TeraVM), and N2X traffic generators.
- Exposure to debug tools such as Wireshark/tcpdump.
Job Requirements:
- Knowledge of the software test cycle, test plan, and test case creation
- Understanding of end-to-end test setup topology and debugging
- Ability to perform system-level functional and non-functional tests
- Familiarity with the manual test life cycle
- Designing and writing test automation scripts using automation frameworks (see the sketch after this list)
- Exposure to CI/CD pipeline implementation and maintenance using Jenkins and Groovy scripting
- Linux skills (system configuration and administration, containers, networking such as sockets) and database management
- Good debugging skills and knowledge of various debug tools
- Bug reporting/tracking and providing logs
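As a small illustration of the protocol-level automation this role involves, here is a hedged Python sketch using scapy (a library not named in the posting) to craft and verify an 802.1Q VLAN-tagged frame of the kind a VLAN regression test would inject via a traffic generator:

```python
# Minimal sketch: build and inspect a VLAN-tagged ICMP frame with scapy.
# Addresses and VLAN ID are hypothetical test values.
from scapy.all import Ether, Dot1Q, IP, ICMP

def build_vlan_frame(vlan_id: int = 100):
    """Craft an 802.1Q-tagged ICMP echo request for a VLAN test case."""
    return (Ether(src="00:11:22:33:44:55", dst="ff:ff:ff:ff:ff:ff")
            / Dot1Q(vlan=vlan_id)
            / IP(src="10.0.0.1", dst="10.0.0.2")
            / ICMP())

frame = build_vlan_frame(100)
assert frame[Dot1Q].vlan == 100  # verify the tag before transmission
frame.show()  # dump the layer stack, similar to a Wireshark decode
```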
Role: Functions Developer (Embedded C – Algorithm / Driving Functions Development)
Location: Hyderabad
Duration: Full-time
Job Description:
- Design and development of automotive feature/function software/components (ACC, AEB, TSR, LKA etc.) for ADAS/AD systems
- Coordination and regular interaction with different stakeholders and teams like testing, requirements, leads etc.
- Participate in SW requirement generation, SW architecture, detailed design etc.
Requirement:
- 3-7 years of experience developing algorithms and functions for advanced driver assistance systems (ADAS) and autonomous driving (AD)
- Development experience with safety critical systems
- Experienced in development using MATLAB Simulink, TargetLink, Stateflow
- Experience in modelling and validation of control systems
- Knowledge of SIL, Performance Test, Functional testing
- Embedded software development using C, C++
- Issue management and version control
- Knowledge of ASPICE processes, Static analysis, MISRA checks etc.
- Strong written and verbal communication skills
- Proactive approach for problem solving
Good to have:
- Knowledge of ADAS/AD functions (ACC, TSR, AEB, LCA, etc.) and data analysis
- Experienced in managing and authoring of function specification requirements
- Familiarity with AUTOSAR RTE
Nice to have:
- AUTOSAR, Functional Safety (ISO 26262) exposure
- Scripting knowledge: Python, MATLAB (see the illustrative sketch after this list)
- Working knowledge of automotive protocols like CAN, Ethernet etc.
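Since the posting lists Python scripting alongside the driving functions, here is a purely illustrative Python sketch of a time-gap ACC control law. Real driving functions are, per the posting, modelled in Simulink and deployed as safety-qualified Embedded C, so treat this only as a sketch of the underlying algorithm; all gains and limits are made-up values.

```python
# Illustrative time-gap ACC controller (not production ADAS code).
# Gains and limits are hypothetical tuning values.
def acc_accel_command(ego_speed, lead_speed, gap, time_gap=1.8,
                      k_gap=0.3, k_speed=0.5, a_min=-3.0, a_max=2.0):
    """Proportional control: track a desired gap of time_gap * ego_speed.

    ego_speed, lead_speed in m/s; gap in m; returns acceleration in m/s^2.
    """
    desired_gap = time_gap * ego_speed
    gap_error = gap - desired_gap          # positive when we are too far back
    speed_error = lead_speed - ego_speed   # positive when lead is pulling away
    accel = k_gap * gap_error + k_speed * speed_error
    return max(a_min, min(a_max, accel))   # saturate to comfort/safety limits

# Example: closing on a slower lead vehicle -> command gentle braking
print(acc_accel_command(ego_speed=25.0, lead_speed=22.0, gap=40.0))
```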
Urgent openings with one of our clients
Experience: 3 to 7 years
Number of Positions: 20
Job Location: Hyderabad
Notice Period: 30 days
1. Expertise in building AWS Data Engineering pipelines with AWS Glue -> Athena -> QuickSight (see the Athena sketch after this list)
2. Experience in developing Lambda functions with AWS Lambda
3. Expertise with Spark/PySpark – the candidate should be hands-on with PySpark code and able to do transformations with Spark
4. Should be able to code in Python and Scala
5. Snowflake experience will be a plus
Hadoop and Hive are good to have; a working understanding is enough.
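To ground item 1, here is a minimal boto3 sketch that starts an Athena query against a Glue-catalogued table and points the results at S3; the database, table, region, and bucket names are placeholders, not details from the posting.

```python
# Sketch: running an Athena query over a Glue-catalogued table with boto3.
# Database, table, region, and output bucket are hypothetical placeholders.
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

qid = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS n FROM orders GROUP BY status",
    QueryExecutionContext={"Database": "analytics"},  # Glue Data Catalog DB
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)["QueryExecutionId"]

# Poll until the query finishes; QuickSight would instead read the same
# table directly through an Athena data source.
while True:
    status = athena.get_query_execution(QueryExecutionId=qid)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)
print(state)
```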
Skills Required
- 8+ years of industry work experience
- Proven experience as a Full Stack Developer or similar role
- 3+ years of web application development with JavaScript as full stack
- Full stack experience designing and building scalable applications from end-to-end
- Excellent JavaScript / TypeScript skills
- Strong proficiency in React (hooks knowledge is a plus)
- Strong proficiency in Node.js
- Good HTML5 / CSS3 skills with expertise in responsive web design
- Must have design and development experience in microservices using Node.js and TypeScript
- Experience with NoSQL databases such as MongoDB (including Mongoose and the aggregation framework) and Redis
- Experience with WebSockets and related frameworks (e.g. Socket.IO)
- Experience in using and developing GraphQL APIs
- Experience in performance tuning
- Knowledge of code versioning tools such as Git, Mercurial or SVN.
- Open minded to take up any challenge, research and provide solutions
- Great attention to detail
- Testing libraries – Jest and Testing Library are a plus
- TDD / BDD experience is plus
- Experience with AWS, K8S, CI/CD is plus
- Familiarity with SDLC methodologies such as Scrum, Agile, and Continuous Integration
Roles & Responsibilities
- Design – Analyze, design, and document systems/solutions that meet business needs and are scalable, resilient, and maintainable with low overhead on both the client and server side
- Problem solving – Solve the challenges and problems faced by the team by guiding them with best practices
- Coordinate - Communicate system requirements to developers; explain system structure to them and provide assistance
- Code Reviews – Perform code reviews
- Planning – Plan and assign tasks to team members
- Develop – Develop microservices and micro frontends
Work Environment Details:
Founded in 2011 and trusted by companies in more than 70 countries, KnowledgeHut is the skills solutions provider that organizations and individuals the world over count on to innovate faster and create progress.
We help technologists master their craft and take control of their careers. We empower businesses everywhere to build adaptable teams, speed up release cycles and become scalable, reliable, and secure.
Our mission to democratize technology skills is what drives us, and our values are at the helm of how we work together. We thrive in an environment with creativity around every corner, challenges that keep us on our toes, and peers who inspire us to be the best we can be.
If you have the development skills required and are excited to come to work every day knowing you’re helping our customers build the skills that power innovation, we want to hear from you!
Be Part Of Building The Future
Dremio is the Data Lake Engine company. Our mission is to reshape the world of analytics to deliver on the promise of data with a fundamentally new architecture, purpose-built for the exploding trend towards cloud data lake storage such as AWS S3 and Microsoft ADLS. We dramatically reduce and even eliminate the need for the complex and expensive workarounds that have been in use for decades, such as data warehouses (whether on-premise or cloud-native), structural data prep, ETL, cubes, and extracts. We do this by enabling lightning-fast queries directly against data lake storage, combined with full self-service for data users and full governance and control for IT. The results for enterprises are extremely compelling: 100X faster time to insight; 10X greater efficiency; zero data copies; and game-changing simplicity. And equally compelling is the market opportunity for Dremio, as we are well on our way to disrupting a $25BN+ market.
About the Role
The Dremio India team owns the DataLake Engine along with the cloud infrastructure and services that power it. With a focus on next-generation data analytics supporting modern table formats like Iceberg and Delta Lake, open source initiatives such as Apache Arrow and Project Nessie, and hybrid-cloud infrastructure, this team provides many opportunities to learn, deliver, and grow in your career. We are looking for innovative minds with experience in leading and building high-quality distributed systems at massive scale and solving complex problems.
Responsibilities & ownership
- Lead, build, deliver and ensure customer success of next-generation features related to scalability, reliability, robustness, usability, security, and performance of the product.
- Work on distributed systems for data processing with efficient protocols and communication, locking and consensus, schedulers, resource management, low-latency access to distributed storage, auto scaling, and self-healing.
- Understand and reason about concurrency and parallelization to deliver scalability and performance in a multithreaded and distributed environment.
- Lead the team to solve complex and unknown problems
- Solve technical problems and customer issues with technical expertise
- Design and deliver architectures that run optimally on public clouds like GCP, AWS, and Azure
- Mentor other team members for high quality and design
- Collaborate with Product Management to deliver on customer requirements and innovation
- Collaborate with Support and field teams to ensure that customers are successful with Dremio
Requirements
- B.S./M.S./equivalent in Computer Science or a related technical field, or equivalent experience
- Fluency in Java/C++ with 8+ years of experience developing production-level software
- Strong foundation in data structures, algorithms, multi-threaded and asynchronous programming models, and their use in developing distributed and scalable systems
- 5+ years of experience in developing complex and scalable distributed systems and delivering, deploying, and managing microservices successfully
- Hands-on experience in query processing or optimization, distributed systems, concurrency control, data replication, code generation, networking, and storage systems
- Passion for quality, zero downtime upgrades, availability, resiliency, and uptime of the platform
- Passion for learning and delivering using latest technologies
- Ability to solve ambiguous, unexplored, and cross-team problems effectively
- Hands-on experience working on projects on AWS, Azure, and Google Cloud Platform
- Experience with containers and Kubernetes for orchestration and container management in private and public clouds (AWS, Azure, and Google Cloud)
- Understanding of distributed file systems such as S3, ADLS, or HDFS
- Excellent communication skills and affinity for collaboration and teamwork
- Ability to work individually and collaboratively with other team members
- Ability to scope and plan solutions for big problems and mentor others on the same
- Interested and motivated to be part of a fast-moving startup with a fun and accomplished team