11+ SAS/STAT Jobs in Hyderabad | SAS/STAT Job openings in Hyderabad
Job Overview: We are seeking a dedicated Senior Statistical Programmer to join our dynamic team. You will be responsible for the development, quality control, and documentation of statistical (SAS) programming deliverables for clinical research studies.
Key Responsibilities:
1. Lead and oversee the development of SAS programs for the management and statistical analysis of clinical trial data.
2. Develop, test, and validate statistical tables, listings, and graphs (TLGs) in support of the statistical analysis plan.
3. Support the generation and review of protocols, data management plans, study reports, and other regulatory documents.
4. Provide input to the statistical analysis plan, table shells, data integration plans, and mock-ups.
5. Ensure data quality by designing and validating key data checks and listings.
6. Develop specifications for derived datasets and perform data transformation as necessary.
7. Collaborate effectively with data management and biostatistics teams, clinicians, and other project stakeholders.
8. Guide junior programmers and provide mentoring as needed.
9. Keep abreast of new developments in SAS programming and clinical research, and act as a subject matter expert for the team.
Required Qualifications and Skills:
1. Bachelor's or Master's degree in Statistics, Mathematics, Computer Science, or a related field.
2. Minimum of 3 years of statistical programming experience in the pharmaceutical, biotech, or CRO industry.
3. Excellent SAS programming skills and proficiency in SAS/Base, SAS/Macro, SAS/Graph, and SAS/STAT.
4. Familiarity with CDISC SDTM and ADaM data standards.
5. In-depth understanding of clinical trial data and complex statistical methods.
6. Excellent problem-solving skills and a proactive approach to identifying and resolving issues.
7. Strong written and verbal communication skills in English, with the ability to translate complex data into understandable results.
8. Prior experience leading teams or projects is highly desirable.
Preferred Skills:
1. Experience in oncology, immunology, respiratory, infectious diseases, or neurosciences is a plus.
2. Knowledge of other statistical software such as R or Python is a plus.
3. Knowledge of regulatory guidelines (FDA/EMA/ICH) is preferred.
Below is the high-level job description:
- Creation of new policies and use cases
- 24/7 administration and management of in-scope security devices, with dedicated support for implementing changes (additions, modifications, and deletions) to security device configurations as per the organization's change management policy and process
- Participation in Change/Incident/Problem Management calls to understand and assess user-specific requirements and take appropriate action where needed
- Development of a comprehensive Plan of Action (PoA), including a rollback strategy as a precaution, for any change implementation, migration, or upgrade
- Precise execution of scheduled configuration changes within the specified timeframe as per the organization's policy and process
- Thorough sanity checks after any change, modification, upgrade, migration, or rollback, documented along with appropriate artefacts
- Periodic review of configuration, agent reconciliation, architecture, and infrastructure to ensure compliance, prevent interruptions, and identify automation and optimization opportunities
- Thorough review of release notes for known issues and limitations pertaining to current and proposed releases
- Offering suggestions or resolutions where advisories apply to the infrastructure
- Post-upgrade, monitoring the security devices, validating application traffic, conducting essential testing, and documenting all artefacts generated
- Documenting open issues, limitations, and bugs of the new security solution and circulating them to all concerned teams before completing the handover
- Executing routine device configuration and policy database updates adhering to InfoSec guidelines on a daily, weekly, and monthly basis as required
- Preparing appropriate reports along with a PoA and discussing them with concerned stakeholders to ensure timely implementation of automation and optimization opportunities
Join us to reimagine how businesses integrate data and automate processes – with AI at the core.
About FloData
FloData is re-imagining the iPaaS and Business Process Automation (BPA) space for a new era - one where business teams, not just IT, can integrate data, run automations, and solve ops bottlenecks using intuitive, AI-driven interfaces. We're a small, hands-on team with a deep technical foundation and strong industry connections. Backed by real-world learnings from our earlier platform version, we're now going all-in on building a generative AI-first experience.
The Opportunity
We’re looking for a GenAI Engineer to help build the intelligence layer of our new platform. From designing LLM-powered orchestration flows with LangGraph to building frameworks for evaluation and monitoring with LangSmith, you’ll shape how AI powers real-world enterprise workflows.
If you thrive on working at the frontier of LLM systems engineering, enjoy scaling prototypes into production-grade systems, and want to make AI reliable, explainable, and enterprise-ready - this is your chance to help build a category-defining product.
What You'll Do
- Spend ~70% of your time architecting, prototyping, and productionizing AI systems (LLM orchestration, agents, evaluation, observability)
- Develop AI frameworks: orchestration (LangGraph), evaluation/monitoring (LangSmith), vector/graph DBs, and other GenAI infra (a minimal orchestration sketch follows this list)
- Work with product engineers to seamlessly integrate AI services into frontend and backend workflows
- Build systems for AI evaluation, monitoring, and reliability to ensure trustworthy performance at scale
- Translate product needs into AI-first solutions, balancing rapid prototyping with enterprise-grade robustness
- Stay ahead of the curve by exploring emerging GenAI frameworks, tools, and research for practical application
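To make the orchestration work concrete, here is a minimal sketch of the kind of LangGraph flow this role would build. It assumes the open-source langgraph package; the state fields and node logic are hypothetical placeholders, not FloData's actual pipeline.

```python
# Minimal LangGraph flow: plan a request, then execute it.
from typing import TypedDict

from langgraph.graph import END, StateGraph

class FlowState(TypedDict):
    request: str
    plan: str
    result: str

def plan_step(state: FlowState) -> dict:
    # In a real system this would call an LLM to draft an integration plan.
    return {"plan": f"plan for: {state['request']}"}

def execute_step(state: FlowState) -> dict:
    # Placeholder for running the planned workflow against enterprise systems.
    return {"result": f"executed: {state['plan']}"}

graph = StateGraph(FlowState)
graph.add_node("plan", plan_step)
graph.add_node("execute", execute_step)
graph.set_entry_point("plan")
graph.add_edge("plan", "execute")
graph.add_edge("execute", END)

app = graph.compile()
print(app.invoke({"request": "sync invoices to the ERP"}))
```

In production, evaluation and monitoring hooks (for example, LangSmith tracing) would wrap these nodes rather than the bare functions shown here.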
Must Have
- 3–5 years of engineering experience, with at least 1–2 years in GenAI systems
- Hands-on experience with LangGraph, LangSmith, LangChain, or similar frameworks for orchestration/evaluation
- Deep understanding of LLM workflows: prompt engineering, fine-tuning, RAG, evaluation, monitoring, and observability
- A strong product mindset—comfortable bridging research-level concepts with production-ready business use cases
- Startup mindset: resourceful, pragmatic, and outcome-driven
Good To Have
- Experience integrating AI pipelines with enterprise applications and hybrid infra setups (AWS, on-prem, VPCs)
- Experience building AI-native user experiences (assistants, copilots, intelligent automation flows)
- Familiarity with enterprise SaaS/IT ecosystems (Salesforce, Oracle ERP, Netsuite, etc.)
Why Join Us
- Own the AI backbone of a generational product at the intersection of AI, automation, and enterprise data
- Work closely with founders and leadership with no layers of bureaucracy
- End-to-end ownership of AI systems you design and ship
- Be a thought partner in setting AI-first principles for both tech and culture
- Onsite in Hyderabad, with flexibility when needed
Sounds like you?
We'd love to talk. Apply now or reach out directly to explore this opportunity.
We are looking for a skilled and motivated Data Engineer with strong experience in Python programming and Google Cloud Platform (GCP) to join our data engineering team. The ideal candidate will be responsible for designing, developing, and maintaining robust and scalable ETL (Extract, Transform, Load) data pipelines. The role involves working with various GCP services, implementing data ingestion and transformation logic, and ensuring data quality and consistency across systems.
Key Responsibilities:
- Design, develop, test, and maintain scalable ETL data pipelines using Python (see the sketch after this list).
- Work extensively on Google Cloud Platform (GCP) services such as:
  - Dataflow for real-time and batch data processing
  - Cloud Functions for lightweight serverless compute
  - BigQuery for data warehousing and analytics
  - Cloud Composer for orchestration of data workflows (based on Apache Airflow)
  - Google Cloud Storage (GCS) for managing data at scale
  - IAM for access control and security
  - Cloud Run for containerized applications
- Perform data ingestion from various sources and apply transformation and cleansing logic to ensure high-quality data delivery.
- Implement and enforce data quality checks, validation rules, and monitoring.
- Collaborate with data scientists, analysts, and other engineering teams to understand data needs and deliver efficient data solutions.
- Manage version control using GitHub and participate in CI/CD pipeline deployments for data projects.
- Write complex SQL queries for data extraction and validation from relational databases such as SQL Server, Oracle, or PostgreSQL.
- Document pipeline designs, data flow diagrams, and operational support procedures.
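As an illustration of the first responsibility, here is a minimal Apache Beam pipeline of the kind that runs on Dataflow: read CSV files from GCS, apply a simple cleansing step, and load the result into BigQuery. The project, bucket, table, and schema names are hypothetical placeholders.

```python
# Minimal Apache Beam pipeline: GCS CSV -> cleanse -> BigQuery.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def parse_and_clean(line: str):
    # Hypothetical transform: split a CSV line and drop malformed rows.
    fields = line.split(",")
    if len(fields) != 3:
        return  # simple data-quality check: skip bad records
    try:
        amount = float(fields[2])
    except ValueError:
        return
    yield {"id": fields[0], "name": fields[1].strip(), "amount": amount}

def run():
    # Swap runner="DirectRunner" to test locally before deploying to Dataflow.
    options = PipelineOptions(
        runner="DataflowRunner",
        project="my-project",
        region="us-central1",
        temp_location="gs://my-bucket/tmp",
    )
    with beam.Pipeline(options=options) as p:
        (
            p
            | "Read" >> beam.io.ReadFromText("gs://my-bucket/input/*.csv")
            | "Clean" >> beam.FlatMap(parse_and_clean)
            | "Load" >> beam.io.WriteToBigQuery(
                "my-project:analytics.orders",
                schema="id:STRING,name:STRING,amount:FLOAT",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            )
        )

if __name__ == "__main__":
    run()
```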
Required Skills:
- 4–8 years of hands-on experience in Python for backend or data engineering projects.
- Strong understanding of and working experience with GCP services (especially Dataflow, BigQuery, Cloud Functions, Cloud Composer, etc.).
- Solid understanding of data pipeline architecture, data integration, and transformation techniques.
- Experience in working with version control systems like GitHub and knowledge of CI/CD practices.
- Strong experience in SQL with at least one enterprise database (SQL Server, Oracle, PostgreSQL, etc.).
Roles and Responsibilities:
- Acquire and activate travel agents through various channels like office visits, telephone, email, social networks, events & meetings.
- Establish, grow, and maintain commercial ties with travel agents—both current and potential—in the designated territory to attract new partners for our services.
- Create a sales funnel, identify the right leads, define long-term goals, and engage in strategic business planning.
- Align sales activities with the tele-sales, product, marketing, and customer success teams.
- Build an in-depth understanding of our products, and give demos or resolve product queries to convert agents.
- Attend road shows, trade fairs and other travel industry events.
- Maintain and analyse data.
- Stay updated on industry insights and opportunities for strategic planning and growth.
Experience and Skills you MUST have:
- At least 4 years of sales experience in the travel industry
- Excellent communication and interpersonal skills
- Ability to build relationships
- Prior experience in acquiring travel agents or in visas or insurance is a big plus
- Proficient with CRMs and Google Sheets
- Ability to handle pressure
Experience: 3+ years in cloud architecture
About Company:
The company is a global leader in secure payments and trusted transactions. They are at the forefront of the digital revolution that is shaping new ways of paying, living, doing business and building relationships that pass on trust along the entire payments value chain, enabling sustainable economic growth. Their innovative solutions, rooted in a rock-solid technological base, are environmentally friendly, widely accessible and support social transformation.
Cloud Architect / Lead
- Role Overview
- A senior engineer with a strong background and experience in cloud-related technologies and architectures, who can design target cloud architectures to transform existing ones together with the in-house team, and can actively configure and build cloud architectures hands-on while guiding others.
- Key Knowledge
- 3-5+ years of experience in AWS/GCP or Azure technologies
- Ideally certified on one or more of the major cloud platforms
- Strong hands-on experience with technologies such as Terraform, Kubernetes (K8s), and Docker, including container orchestration
- Ability to guide and lead internal agile teams on cloud technology
- Background in the financial services industry or similar critical operational experience
Job Description:
- Understanding client requirements and functional specifications.
- Developing and maintaining dynamic websites and web applications.
- Ensuring foolproof performance of deliverables.
- Coordinating with co-developers and other related departments.
- Sending regular updates about project status.
Desired Candidate Profile:
- Proficient knowledge of a back-end programming language: one or more of Python, JavaScript, Node.js
- Proficient knowledge of back-end server frameworks: Flask
- Proficient in handling any of JSON, XML, and YAML
- Databases: MongoDB
- Proficient in configuring the backend behind an Nginx server
- Experience building API services from scratch: project structuring, setting up environment objects, building reusable components, etc. (a minimal sketch follows this list)
- IDE: Visual Studio Code, PyCharm, or Notepad++
- Should have excellent written and oral communication skills (English)
- Must have the capacity to work independently and also as part of a team
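For illustration, here is a minimal sketch of a Flask API service along the lines described above. The routes and the in-memory store are hypothetical stand-ins; a real deployment would use MongoDB and sit behind Nginx via a WSGI server.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical in-memory store; a real service would use MongoDB (e.g. via pymongo).
ITEMS = {}

@app.post("/api/items")
def create_item():
    payload = request.get_json(force=True)  # accept a JSON body
    item_id = str(len(ITEMS) + 1)
    ITEMS[item_id] = payload
    return jsonify({"id": item_id}), 201

@app.get("/api/items/<item_id>")
def get_item(item_id):
    item = ITEMS.get(item_id)
    if item is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(item)

if __name__ == "__main__":
    app.run(debug=True)  # in production, run via a WSGI server behind Nginx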
Key Responsibilities:
Responsible for ensuring the quality of the Salesforce platform through the creation of test plans, test cases, scenarios, scripts, and procedures for new features, bugs, and enhancements. Ensure the successful deployment of projects by defining and implementing end-to-end product test strategies.
Analyze functional and technical requirements and translate them into manual test scenarios.
Communicate with AQA engineers to get our e2e scenarios automated.
Participate in scrum team activities and be the voice of quality in your team.
Collaborate with other teams, the product office, the support team, and release managers.
Required Skills:
2+ years as a QA Engineer/Tester (relevant experience including Salesforce)
Knowledge of the Agile SDLC utilizing JIRA
Experience creating/updating and maintaining manual test cases, test plans, and test suites
Experience with test case management tools as well as maintaining test cases outside of tools
Familiar with bug tracking tools and creating bug reports
Experience troubleshooting and reproducing issues and defects
Experience testing web service APIs and using API testing tools such as Postman (a minimal scripted example appears at the end of this listing)
Will be great to see:
Experience with Salesforce Sales Cloud or SF certification
Experience with SQL and query writing
Comfortable with automated testing tools (Selenium)
Experience with Java
Knowledge of Apex (Salesforce's Java-like language)
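As a small illustration of the API testing skill above, here is a pytest-style check of the kind a QA engineer might script alongside Postman collections, using the standard Salesforce REST query endpoint. The org URL and token are placeholders, not real credentials.

```python
# Minimal API check, runnable with pytest.
import requests

BASE_URL = "https://example.my.salesforce.com/services/data/v58.0"  # hypothetical org
TOKEN = "REPLACE_WITH_SESSION_TOKEN"

def test_query_accounts_returns_records():
    # Same request a Postman GET would make, with an OAuth bearer header.
    resp = requests.get(
        f"{BASE_URL}/query",
        params={"q": "SELECT Id, Name FROM Account LIMIT 5"},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    assert resp.status_code == 200, resp.text
    body = resp.json()
    assert "records" in body and len(body["records"]) <= 5
```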
Automotive Embedded Developer
Experience: 3-8 Years
Location: Bangalore, Hyderabad, Chennai, Pune, Thiruvananthapuram, Calicut
Company: Tata Elxsi
Skills: Embedded C, CAN protocol, Vector tools, ASPICE; ECU design and development
Responsibilities for Data Engineer
- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
- Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.
Qualifications for Data Engineer
- Advanced working knowledge of SQL, including query authoring and experience with relational databases, as well as working familiarity with a variety of databases.
- Experience building and optimizing ‘big data’ data pipelines, architectures and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills related to working with unstructured datasets.
- Build processes supporting data transformation, data structures, metadata, dependency and workload management.
- A successful history of manipulating, processing and extracting value from large disconnected datasets.
- Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
- Strong project management and organizational skills.
- Experience supporting and working with cross-functional teams in a dynamic environment.
- We are looking for a candidate with 5+ years of experience in a Data Engineer role, who has attained a Graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field. They should also have experience using the following software/tools:
- Experience with big data tools: Hadoop, Spark, Kafka, etc.
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
- Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc. (a minimal DAG sketch follows this list)
- Experience with AWS cloud services: EC2, EMR, RDS, Redshift
- Experience with stream-processing systems: Storm, Spark-Streaming, etc.
- Experience with object-oriented/object function scripting languages: Python, Java, C++, Scala, etc.
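As a small illustration of the workflow-management experience listed above, here is a minimal Apache Airflow DAG wiring an extract-transform-load sequence. The task bodies are placeholders and the DAG id and schedule are hypothetical; syntax is per recent Airflow (2.4+).

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("extract: pull raw data from a source system")

def transform():
    print("transform: clean and reshape the extracted data")

def load():
    print("load: write results to the warehouse")

with DAG(
    dag_id="etl_example",  # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={"retries": 1, "retry_delay": timedelta(minutes=5)},
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_transform >> t_load
```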