
11+ SPSS Jobs in Hyderabad | SPSS Job openings in Hyderabad

Apply to 11+ SPSS Jobs in Hyderabad on CutShort.io. Explore the latest SPSS Job opportunities across top companies like Google, Amazon & Adobe.

Cadila Zydus Healthcare Pvt Ltd
SD Colony, Secunderabad, Hyderabad; Bengaluru (Bangalore)
0 - 1 yrs
₹7L - ₹10L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization

Interpret data, analyze results using statistical techniques and provide ongoing reports

Develop and implement databases, data collection systems, data analytics and other strategies that optimize statistical efficiency and quality

Acquire data from primary or secondary data sources and maintain databases/data systems

Identify, analyze, and interpret trends or patterns in complex data sets

Filter and “clean” data by reviewing computer reports, printouts, and performance indicators to locate and correct code problems

Work with management to prioritize business and information needs

Locate and define new process improvement opportunities
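The cleaning and trend-analysis duties above can be sketched in plain Python. This is a minimal illustration rather than part of the posting; the function names and the moving-average window are illustrative choices:

```python
def clean(rows):
    """Drop records with missing or non-positive values."""
    return [r for r in rows if r.get("value") is not None and r["value"] > 0]

def moving_average(values, window=3):
    """Simple moving average, a basic way to surface trends in a series."""
    return [round(sum(values[i:i + window]) / window, 2)
            for i in range(len(values) - window + 1)]

raw = [{"value": 10}, {"value": None}, {"value": 12}, {"value": -1}, {"value": 14}]
cleaned = clean(raw)
trend = moving_average([r["value"] for r in cleaned])
print(trend)  # [12.0]
```

In practice these steps would run against a database or reporting extract rather than an in-memory list, but the filter-then-aggregate shape is the same.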

Read more
Wallero technologies
Hyderabad
8 - 20 yrs
₹15L - ₹35L / yr
PySpark

Please find the job specifications below.

 

Position: Data Engineer

Location: Hyderabad, Telangana, India

Job Type: Permanent (full-time)


Company Description:


We are a Seattle-based product engineering, software development and technology services firm with offices in the U.S., Canada, Bulgaria, and India (Manjeera Trinity Corporate, JNTU-Hitech City Road, beside LULU Mall, Hyderabad). Wallero is a Microsoft Gold Partner company. For a detailed overview, see About Wallero: https://wallero.com/aboutus/ and Wallero Culture: https://wallero.com/careers/


Job Description:


  • Tech stack: Python, PySpark, Databricks.
  • Strong expertise in the Supply Chain domain.
  • Technical expert in the field, with the ability to think outside the box.
  • Excellent communicator.
  • Work autonomously with minimal instruction or involvement from JNJ.
  • Should be able to guide the team on best practices (reusable, modularized coding, design patterns, and so on).
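The "reusable, modularized coding" practice called out above can be sketched as small, composable transformation steps. On Databricks this would normally be written against PySpark DataFrames; the following is a dependency-free Python analogue with illustrative function names:

```python
from functools import reduce

# Each step is a small, reusable transformation; a pipeline composes them.
def drop_nulls(rows):
    return [r for r in rows if all(v is not None for v in r.values())]

def normalise_keys(rows):
    return [{k.lower(): v for k, v in r.items()} for r in rows]

def pipeline(*steps):
    """Compose transformation steps, applied left to right."""
    return lambda rows: reduce(lambda acc, step: step(acc), steps, rows)

etl = pipeline(drop_nulls, normalise_keys)
print(etl([{"SKU": "A1", "Qty": 5}, {"SKU": "B2", "Qty": None}]))
# [{'sku': 'A1', 'qty': 5}]
```

The design point is that each step can be unit-tested and reused across pipelines, which is exactly what "modularized coding" buys you on a team.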


If you believe you have the skills and experience necessary for this role and are excited about contributing to our team, we would love to hear from you.


Thank you,

 

Manu Nakka

Lead Technical Recruiter

Read more
Wallero technologies
Hyderabad
7 - 15 yrs
₹20L - ₹28L / yr
SQL
Data modeling
ADF
PowerBI
  1. Strong communication skills are essential, as the selected candidate will be responsible for leading a team of two in the future.
  2. Proficiency in SQL.
  3. Expertise in data modeling.
  4. Experience with Azure Data Factory (ADF).
  5. Competence in Power BI.
  6. SQL – should be strong in data modeling, table design, and SQL queries.
  7. ADF – must have hands-on experience building ADF pipelines and setting them up end-to-end in Azure, including subscriptions, Integration Runtime (IR), and resource group creation.
  8. Power BI – hands-on knowledge of Power BI reports, including documentation and adherence to existing standards.
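As a rough illustration of the data modeling and SQL skills listed above, here is a minimal star-schema sketch using Python's built-in sqlite3 (the table and column names are hypothetical, chosen only for the example):

```python
import sqlite3

# A minimal star-schema sketch: one dimension table, one fact table.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE fact_sales (
        sale_id INTEGER PRIMARY KEY,
        product_id INTEGER REFERENCES dim_product(product_id),
        amount REAL
    );
""")
con.executemany("INSERT INTO dim_product VALUES (?, ?)",
                [(1, "Widget"), (2, "Gadget")])
con.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                [(1, 1, 10.0), (2, 1, 15.0), (3, 2, 7.5)])

# The kind of aggregate query a Power BI report would sit on top of.
total_by_product = con.execute("""
    SELECT p.name, SUM(f.amount)
    FROM fact_sales f JOIN dim_product p USING (product_id)
    GROUP BY p.name ORDER BY p.name
""").fetchall()
print(total_by_product)  # [('Gadget', 7.5), ('Widget', 25.0)]
```

In ADF the inserts would be replaced by pipeline activities loading the fact and dimension tables, but the dimensional design itself is the same idea.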


Read more
Softobiz Technologies Private limited

at Softobiz Technologies Private limited

2 candid answers
1 recruiter
Adlin Asha
Posted by Adlin Asha
Hyderabad
8 - 18 yrs
₹15L - ₹30L / yr
ETL
Informatica
Data Warehouse (DWH)
Amazon Redshift
PostgreSQL

Experience: 8+ Years

Work Location: Hyderabad

Mode of work: Work from Office


Senior Data Engineer / Architect

 

Summary of the Role

 

The Senior Data Engineer / Architect will be a key role within the data and technology team, responsible for engineering and building data solutions that enable seamless use of data within the organization. 

 

Core Activities

- Work closely with the business teams and business analysts to understand and document data usage requirements
- Develop designs for data engineering solutions, including data pipelines, ETL, data warehouse, data mart and data lake solutions
- Develop data designs for reporting and other data use requirements
- Develop data governance solutions that provide data governance services, including data security, data quality, data lineage, etc.
- Lead implementation of data use and data quality solutions
- Provide operational support to users of the implemented data solutions
- Support development of solutions that automate reporting and business intelligence requirements
- Support development of machine learning and AI solutions using large-scale internal and external datasets

 

Other activities

- Work on and manage technology projects as and when required
- Provide user and technical training on data solutions

 

Skills and Experience

- At least 5-8 years of experience in a senior data engineer / architect role
- Strong experience with AWS-based data solutions, including AWS Redshift, analytics and data governance solutions
- Strong experience with industry-standard data governance / data quality solutions
- Strong experience managing a PostgreSQL data environment
- A background as a software developer working in AWS / Python is beneficial
- Experience with BI tools such as Power BI and Tableau
- Strong written and oral communication skills

 

Read more
Product and Service based company
Hyderabad, Ahmedabad
4 - 8 yrs
₹15L - ₹30L / yr
Amazon Web Services (AWS)
Apache
Snowflake schema
Python
Spark

Job Description

 

Mandatory Requirements 

  • Experience in AWS Glue

  • Experience in Apache Parquet 

  • Proficient in AWS S3 and data lake 

  • Knowledge of Snowflake

  • Understanding of file-based ingestion best practices.

  • Scripting languages: Python & PySpark

CORE RESPONSIBILITIES

  • Create and manage cloud resources in AWS 

  • Data ingestion from different data sources that expose data through different technologies, such as RDBMS, flat files, streams, and time-series data from various proprietary systems; implement data ingestion and processing with the help of Big Data technologies 

  • Data processing/transformation using various technologies such as Spark and Cloud Services. You will need to understand your part of the business logic and implement it using the language supported by the base data platform 

  • Develop automated data quality checks to ensure the right data enters the platform, and verify the results of the calculations 

  • Develop an infrastructure to collect, transform, combine and publish/distribute customer data.

  • Define process improvement opportunities to optimize data collection, insights and displays.

  • Ensure data and results are accessible, scalable, efficient, accurate, complete and flexible 

  • Identify and interpret trends and patterns from complex data sets 

  • Construct a framework utilizing data visualization tools and techniques to present consolidated analytical and actionable results to relevant stakeholders. 

  • Key participant in regular Scrum ceremonies with the agile teams  

  • Proficient at developing queries, writing reports and presenting findings 

  • Mentor junior members and bring best industry practices.
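The automated data quality checks mentioned in the responsibilities above might look something like this minimal sketch (the check names and rules are illustrative assumptions, not from the posting):

```python
def run_quality_checks(rows, checks):
    """Apply named checks to each row; return the failing rows per check."""
    failures = {name: [] for name in checks}
    for row in rows:
        for name, check in checks.items():
            if not check(row):
                failures[name].append(row)
    return failures

checks = {
    "amount_positive": lambda r: r["amount"] > 0,
    "id_present": lambda r: r.get("id") is not None,
}
rows = [{"id": 1, "amount": 5.0}, {"id": None, "amount": -2.0}]
report = run_quality_checks(rows, checks)
print({k: len(v) for k, v in report.items()})
# {'amount_positive': 1, 'id_present': 1}
```

A real platform would run such checks as a pipeline gate and route the failure report to monitoring, but the core pattern — declarative rules applied to every incoming record — is the same.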

 

QUALIFICATIONS

  • 5-7+ years’ experience as a data engineer in consumer finance or an equivalent industry (consumer loans, collections, servicing, optional products, and insurance sales) 

  • Strong background in math, statistics, computer science, data science or related discipline

  • Advanced knowledge of one of the following languages: Java, Scala, Python, C# 

  • Production experience with: HDFS, YARN, Hive, Spark, Kafka, Oozie / Airflow, Amazon Web Services (AWS), Docker / Kubernetes, Snowflake  

  • Proficient with:
    • Data mining/programming tools (e.g. SAS, SQL, R, Python)
    • Database technologies (e.g. PostgreSQL, Redshift, Snowflake, and Greenplum)
    • Data visualization tools (e.g. Tableau, Looker, MicroStrategy)

  • Comfortable learning about and deploying new technologies and tools. 

  • Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines. 

  • Good written and oral communication skills and ability to present results to non-technical audiences 

  • Knowledge of business intelligence and analytical tools, technologies and techniques.

Familiarity and experience in the following is a plus: 

  • AWS certification

  • Spark Streaming 

  • Kafka Streaming / Kafka Connect 

  • ELK Stack 

  • Cassandra / MongoDB 

  • CI/CD: Jenkins, GitLab, Jira, Confluence, and other related tools

Read more
Data Semantics
Posted by Deepu Vijayan
Remote, Hyderabad, Bengaluru (Bangalore)
4 - 15 yrs
₹3L - ₹30L / yr
ETL
Informatica
Data Warehouse (DWH)
SQL Server Analysis Services (SSAS)
SQL Server Reporting Services (SSRS)

This is regarding a permanent opening with Data Semantics.

Data Semantics 


We are a product-based company and a Microsoft Gold Partner.

Data Semantics is an award-winning Data Science company with a vision to empower every organization to harness the full potential of its data assets. To achieve this, we provide Artificial Intelligence, Big Data and Data Warehousing solutions to enterprises across the globe. Data Semantics was listed among the top 20 Analytics companies by Silicon India (2018) and among the top 20 BI companies by CIO Review India (2014). We are headquartered in Bangalore, India, with offices in 6 global locations including the USA, United Kingdom, Canada, United Arab Emirates (Dubai and Abu Dhabi), and Mumbai. Our mission is to enable our people to learn the art of data management and visualization to help our customers make quick and smart decisions. 

 

Our Services include: 

Business Intelligence & Visualization

App and Data Modernization

Low Code Application Development

Artificial Intelligence

Internet of Things

Data Warehouse Modernization

Robotic Process Automation

Advanced Analytics

 

Our Products:

Sirius – World’s most agile conversational AI platform

Serina

Conversational Analytics

Contactless Attendance Management System

 

 

Company URL:   https://datasemantics.co 


JD:

MSBI

SSAS

SSRS

SSIS

Data Warehousing

SQL

Read more
OSBIndia Private Limited
Bengaluru (Bangalore), Hyderabad
5 - 12 yrs
₹10L - ₹18L / yr
Data Warehouse (DWH)
Informatica
ETL
SQL
Stored Procedures

1.      Core Responsibilities

• Lead solutions for data engineering
• Maintain the integrity of both the design and the data held within the architecture
• Champion and educate people in the development and use of data engineering best practices
• Support the Head of Data Engineering and lead by example
• Contribute to the development of database management services and associated processes relating to the delivery of data solutions
• Provide requirements analysis, documentation, development, delivery and maintenance of data platforms
• Develop database requirements in a structured and logical manner, ensuring delivery is aligned with business prioritisation and best practice
• Design and deliver performance enhancements, application migration processes and version upgrades across a pipeline of BI environments
• Provide support for the scoping and delivery of BI capability to internal users
• Identify risks and issues and escalate to the Line / Project Manager
• Work with clients, existing asset owners and their service providers, and non-BI development staff to clarify and deliver work stream objectives in timescales that meet overall project expectations
• Develop and maintain documentation in support of all BI processes
• Proactively identify cost-justifiable improvements to data manipulation processes
• Research and promote relevant BI tools and processes that contribute to increased efficiency and capability in support of corporate objectives
• Promote a culture that embraces change, continuous improvement and a ‘can do’ attitude
• Demonstrate enthusiasm and self-motivation at all times
• Establish effective working relationships with other internal teams to drive improved efficiency and effective processes
• Be a champion for high-quality data and the use of strategic data repositories, the associated relational model, and the Data Warehouse for optimising the delivery of accurate, consistent and reliable business intelligence
• Ensure that you fully understand and comply with the organisation’s Risk Management Policies as they relate to your area of responsibility, and demonstrate in your day-to-day work that you put customers at the heart of everything you do
• Ensure that you fully understand and comply with the organisation’s Data Governance Policies as they relate to your area of responsibility, and demonstrate in your day-to-day work that you treat data as an important corporate asset which must be protected and managed
• Maintain the company’s compliance standards and ensure timely completion of all mandatory online training modules and attestations

 

2.      Experience Requirements

• 5 years of Data Engineering / ETL development experience is essential
• 5 years of data design experience in an MI / BI / Analytics environment (Kimball, lakehouse, data lake) is essential
• 5 years of experience working in a structured Change Management project lifecycle is essential
• Experience of working in a financial services environment is desirable
• Experience of dealing with senior management within a large organisation is desirable
• 5 years of experience of developing in conjunction with large, complex projects and programmes is desirable
• Experience mentoring other members of the team on best practice and internal standards is essential
• Experience with cloud data platforms (Microsoft Azure) is desirable

 

3.      Knowledge Requirements

• A strong knowledge of business intelligence solutions and an ability to translate this into data solutions for the broader business is essential
• Strong demonstrable knowledge of data warehouse methodologies
• A robust understanding of high-level business processes is essential
• Understanding of data migration, including reconciliation, data cleanse and cutover, is desirable
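The data migration reconciliation mentioned above can be illustrated with a small sketch: compare row counts plus an order-independent fingerprint of the source and target tables. The fingerprinting approach and names are illustrative assumptions, not a prescribed method:

```python
import hashlib

def table_fingerprint(rows):
    """Order-independent fingerprint: row count plus an XOR of per-row hashes."""
    digest = 0
    for row in rows:
        row_repr = repr(sorted(row.items())).encode()
        digest ^= int(hashlib.sha256(row_repr).hexdigest(), 16)
    return len(rows), digest

def reconcile(source, target):
    """True when both tables hold the same rows, regardless of order."""
    return table_fingerprint(source) == table_fingerprint(target)

src = [{"id": 1, "v": "a"}, {"id": 2, "v": "b"}]
tgt = [{"id": 2, "v": "b"}, {"id": 1, "v": "a"}]  # same data, different order
print(reconcile(src, tgt))     # True
print(reconcile(src, src[:1]))  # False
```

On a real migration the fingerprints would be computed in-database (e.g. via aggregate hash queries) rather than in Python, but the count-plus-checksum comparison is the standard shape of a cutover reconciliation check.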

Read more
Hyderabad
3 - 7 yrs
₹1L - ₹15L / yr
Big Data
Spark
Hadoop
PySpark
Amazon Web Services (AWS)

Big Data Developer

Experience: 3 to 7 years
Job Location: Hyderabad
Notice: Immediate / within 30 days

1. Expertise in building AWS data engineering pipelines with AWS Glue -> Athena -> QuickSight
2. Experience in developing Lambda functions with AWS Lambda
3. Expertise with Spark/PySpark; candidates should be hands-on with PySpark code and able to implement transformations with Spark
4. Should be able to code in Python and Scala
5. Snowflake experience is a plus

Hadoop and Hive are good-to-have rather than required skills; a working understanding is sufficient.
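Point 2 above (developing Lambda functions) can be illustrated with a minimal Lambda-style handler. This is a local sketch only: the event shape and field names are hypothetical, and no AWS services are invoked:

```python
import json

def handler(event, context=None):
    """Minimal AWS-Lambda-style handler: filter and reshape incoming records."""
    records = event.get("records", [])
    valid = [{"id": r["id"], "amount": float(r["amount"])}
             for r in records if r.get("amount") is not None]
    # Lambda proxy-style response: status code plus a JSON-serialised body.
    return {"statusCode": 200, "body": json.dumps({"processed": len(valid)})}

event = {"records": [{"id": 1, "amount": "9.5"}, {"id": 2, "amount": None}]}
print(handler(event))
```

Because the handler is a plain function of `(event, context)`, it can be unit-tested locally exactly like this before being deployed behind a trigger such as S3 or API Gateway.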

Read more
InnovAccer
Posted by Jyoti Kaushik
Noida, Bengaluru (Bangalore), Pune, Hyderabad
4 - 7 yrs
₹4L - ₹16L / yr
ETL
SQL
Data Warehouse (DWH)
Informatica
Data Warehousing

We are looking for a Senior Data Engineer to join the Customer Innovation team, who will be responsible for acquiring, transforming, and integrating customer data onto our Data Activation Platform from customers’ clinical, claims, and other data sources. You will work closely with customers to build data and analytics solutions to support their business needs, and be the engine that powers the partnership that we build with them by delivering high-fidelity data assets.

In this role, you will work closely with our Product Managers, Data Scientists, and Software Engineers to build the solution architecture that will support customer objectives. You'll work with some of the brightest minds in the industry, work with one of the richest healthcare data sets in the world, use cutting-edge technology, and see your efforts affect products and people on a regular basis. The ideal candidate is someone who:

  • Has healthcare experience and is passionate about helping heal people,
  • Loves working with data,
  • Has an obsessive focus on data quality,
  • Is comfortable with ambiguity and making decisions based on available data and reasonable assumptions,
  • Has strong data interrogation and analysis skills,
  • Defaults to written communication and delivers clean documentation, and,
  • Enjoys working with customers and problem solving for them.

A day in the life at Innovaccer:

  • Define the end-to-end solution architecture for projects by mapping customers’ business and technical requirements against the suite of Innovaccer products and Solutions.
  • Measure and communicate impact to our customers.
  • Enable customers to activate data themselves using SQL, BI tools, or APIs to answer their questions at speed.

What You Need:

  • 4+ years of experience in a Data Engineering role and a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.
  • 4+ years of experience working with relational databases like Snowflake, Redshift, or Postgres.
  • Intermediate to advanced level SQL programming skills.
  • Data Analytics and Visualization (using tools like PowerBI)
  • The ability to engage with both the business and technical teams of a client - to document and explain technical problems or concepts in a clear and concise way.
  • Ability to work in a fast-paced and agile environment.
  • Easily adapt and learn new things whether it’s a new library, framework, process, or visual design concept.
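As an example of the "intermediate to advanced" SQL skills listed above, a CASE-based pivot is a common analytics pattern; the sketch below uses Python's built-in sqlite3 with a hypothetical claims table (names are illustrative, not Innovaccer's schema):

```python
import sqlite3

# Hypothetical claims table; pivot status values into columns with CASE.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE claims (patient_id INT, status TEXT)")
con.executemany("INSERT INTO claims VALUES (?, ?)",
                [(1, "paid"), (1, "denied"), (2, "paid"), (2, "paid")])

pivot = con.execute("""
    SELECT patient_id,
           SUM(CASE WHEN status = 'paid'   THEN 1 ELSE 0 END) AS paid,
           SUM(CASE WHEN status = 'denied' THEN 1 ELSE 0 END) AS denied
    FROM claims
    GROUP BY patient_id
    ORDER BY patient_id
""").fetchall()
print(pivot)  # [(1, 1, 1), (2, 2, 0)]
```

The same query shape works on Snowflake, Redshift, or Postgres, which is why conditional aggregation is a useful portable idiom when BI tools need one row per entity.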

What we offer:

  • Industry certifications: We want you to be a subject matter expert in what you do. So, whether it’s our product or our domain, we’ll help you dive in and get certified.
  • Quarterly rewards and recognition programs: We foster learning and encourage people to take risks. We recognize and reward your hard work.
  • Health benefits: We cover health insurance for you and your loved ones.
  • Sabbatical policy: We encourage people to take time off and rejuvenate, learn new skills, and pursue their interests so they can generate new ideas with Innovaccer.
  • Pet-friendly office and open floor plan: No boring cubicles.
Read more
MNC
Agency job via Fragma Data Systems, posted by Geeti Gaurav Mohanty
Bengaluru (Bangalore), Hyderabad
3 - 6 yrs
₹10L - ₹15L / yr
Big Data
Spark
ETL
Apache
Hadoop
Desired Skills, Experience, Qualifications, and Certifications:
• 5+ years’ experience developing and maintaining modern ingestion pipelines using technologies like Spark, Apache NiFi, etc.
• 2+ years’ experience with Healthcare Payors (focusing on Membership, Enrollment, Eligibility, Claims, Clinical)
• Hands-on experience with AWS Cloud and its native components like S3, Athena, Redshift & Jupyter Notebooks
• Strong in Spark Scala & Python pipelines (ETL & streaming)
• Strong experience with metadata management tools like AWS Glue
• Strong experience coding in languages like Java and Python
• Worked on designing ETL & streaming pipelines in Spark Scala / Python
• Good experience in requirements gathering, design & development
• Working with cross-functional teams to meet strategic goals
• Experience in high-volume data environments
• Critical thinking and excellent verbal and written communication skills
• Strong problem-solving and analytical abilities; should be able to work and deliver individually
• Good to have: AWS Developer certification, Scala coding experience, Postman API, and Apache Airflow or similar scheduler experience
• Nice to have: experience with healthcare messaging standards like HL7, CCDA, EDI, 834, 835, 837
• Good communication skills
Read more
1CH
Posted by Sathish Sukumar
Chennai, Bengaluru (Bangalore), Hyderabad, NCR (Delhi | Gurgaon | Noida), Mumbai, Pune
4 - 15 yrs
₹10L - ₹25L / yr
Data engineering
Data engineer
ETL
SSIS
ADF
  • Expertise in designing and implementing enterprise scale database (OLTP) and Data warehouse solutions.
  • Hands-on experience in implementing Azure SQL Database, Azure SQL Data Warehouse (Azure Synapse Analytics), and big data processing using Azure Databricks and Azure HDInsight.
  • Expert in writing T-SQL programming for complex stored procedures, functions, views and query optimization.
  • Should be aware of Database development for both on-premise and SAAS Applications using SQL Server and PostgreSQL.
  • Experience in ETL and ELT implementations using Azure Data Factory V2 and SSIS.
  • Experience and expertise in building machine learning models using logistic and linear regression, decision tree, and random forest algorithms.
  • PolyBase queries for exporting and importing data into Azure Data Lake.
  • Building data models both tabular and multidimensional using SQL Server data tools.
  • Writing data preparation, cleaning and processing steps using Python, SCALA, and R.
  • Programming experience using python libraries NumPy, Pandas and Matplotlib.
  • Implementing NoSQL databases and writing queries using Cypher.
  • Designing end user visualizations using Power BI, QlikView and Tableau.
  • Experience working with all versions of SQL Server 2005/2008/2008R2/2012/2014/2016/2017/2019
  • Experience using the expression languages MDX and DAX.
  • Experience in migrating on-premise SQL server database to Microsoft Azure.
  • Hands on experience in using Azure blob storage, Azure Data Lake Storage Gen1 and Azure Data Lake Storage Gen2.
  • Performance tuning complex SQL queries, hands on experience using SQL Extended events.
  • Data modeling using Power BI for Adhoc reporting.
  • Raw data load automation using T-SQL and SSIS
  • Expert in migrating existing on-premise database to SQL Azure.
  • Experience in using U-SQL for Azure Data Lake Analytics.
  • Hands on experience in generating SSRS reports using MDX.
  • Experience in designing predictive models using Python and SQL Server.
  • Developing machine learning models using Azure Databricks and SQL Server
Read more