SPSS Jobs in Hyderabad

11+ SPSS Jobs in Hyderabad | SPSS Job openings in Hyderabad

Apply to 11+ SPSS Jobs in Hyderabad on CutShort.io. Explore the latest SPSS Job opportunities across top companies like Google, Amazon & Adobe.

Cadila Zydus Healthcare Pvt Ltd
S D Colony, Secunderabad, Hyderabad; Bengaluru (Bangalore)
0 - 1 yrs
₹7L - ₹10L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+2 more

Interpret data, analyze results using statistical techniques and provide ongoing reports

Develop and implement databases, data collection systems, data analytics and other strategies that optimize statistical efficiency and quality

Acquire data from primary or secondary data sources and maintain databases/data systems

Identify, analyze, and interpret trends or patterns in complex data sets

Filter and “clean” data by reviewing computer reports, printouts, and performance indicators to locate and correct code problems (see the sketch after this list)

Work with management to prioritize business and information needs

Locate and define new process improvement opportunities
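
The cleaning and trend-analysis duties above can be illustrated with a short, hypothetical Python/pandas sketch; the CSV file and column names below are placeholders, not part of the actual role (which centres on the BI tools listed).

```python
# Hypothetical sketch: clean a raw extract and surface a simple trend.
import pandas as pd

# Assumed input: a raw CSV export with 'date', 'region', and 'sales' columns.
df = pd.read_csv("raw_extract.csv", parse_dates=["date"])

# Basic cleaning: drop exact duplicates, fix types, remove impossible values.
df = df.drop_duplicates()
df["sales"] = pd.to_numeric(df["sales"], errors="coerce")
df = df.dropna(subset=["date", "sales"])
df = df[df["sales"] >= 0]

# Trend: monthly sales per region, with a 3-month rolling mean.
monthly = (
    df.set_index("date")
      .groupby("region")["sales"]
      .resample("MS")
      .sum()
      .reset_index()
)
monthly["rolling_3m"] = (
    monthly.groupby("region")["sales"]
           .transform(lambda s: s.rolling(3, min_periods=1).mean())
)

print(monthly.tail())
```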

RandomTrees

Posted by Amareswarreddt Yaddula
Hyderabad
5 - 16 yrs
₹1L - ₹30L / yr
ETL
Informatica
Data Warehouse (DWH)
Amazon Web Services (AWS)
SQL
+3 more

We are hiring an expert AWS Data Engineer to join our team.


Job Title: AWS Data Engineer

Experience: 5 to 10 years

Location: Remote

Notice: Immediate, or a maximum of 20 days

Role: Permanent


Skill set: AWS, ETL, SQL, Python, PySpark, Postgres DB, Dremio.


Job Description:

 Able to develop ETL jobs.

Able to help with data curation/cleanup, data transformation, and building ETL pipelines (see the sketch after this list).

Strong Postgres DB experience; knowledge of Dremio as a data visualization/semantic layer between the DB and the application is a plus.

SQL, Python, and PySpark are a must.

Good communication skills.
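
A minimal sketch of the kind of ETL job described above, assuming PySpark with a Postgres JDBC driver on the classpath; the table names, credentials, and S3 paths are placeholders. Dremio would typically sit on top of the curated output as the semantic layer.

```python
# Hypothetical PySpark ETL sketch: extract from Postgres, transform, load to Parquet.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read a source table over JDBC (driver JAR assumed to be available).
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://db-host:5432/sales")
    .option("dbtable", "public.orders")
    .option("user", "etl_user")
    .option("password", "etl_password")
    .load()
)

# Transform: basic cleanup and a daily revenue aggregate.
clean = (
    orders.dropDuplicates(["order_id"])
    .filter(F.col("amount") > 0)
    .withColumn("order_date", F.to_date("order_ts"))
)
daily_revenue = clean.groupBy("order_date").agg(F.sum("amount").alias("revenue"))

# Load: write curated data as partitioned Parquet for downstream consumers.
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3a://curated-bucket/daily_revenue/"
)
```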





Monarch Tractors India
Hyderabad
2 - 8 yrs
Best in industry
Machine Learning (ML)
Data Science
Algorithms
Python
C++
+10 more

Designation: Perception Engineer (3D) 

Experience: 0 years to 8 years 

Position Type: Full Time 

Position Location: Hyderabad 

Compensation: As per industry standards

 

About Monarch: 

At Monarch, we’re leading the digital transformation of farming. Monarch Tractor augments both muscle and mind with fully loaded hardware, software, and service machinery that will spur future generations of farming technologies. 

With our farmer-first mentality, we are building a smart tractor that will enhance (not replace) the existing farm ecosystem, alleviate labor availability and cost issues, and provide an avenue for competitive organic and beyond-organic farming by offering mechanical alternatives to harmful chemical solutions. Despite all the cutting-edge technology we will incorporate, our tractor will still plow, till, and haul better than any other tractor in its class. We have all the necessary ingredients to develop, build, and scale the Monarch Tractor and digitally transform farming around the world. 

 

Description: 

We are looking for engineers to work on applied research problems related to perception in autonomous driving of electric tractors. The team works on classical and deep learning-based techniques for computer vision. Several problems, such as SfM, SLAM, 3D image processing, and multiple-view geometry, are being solved for deployment on resource-constrained hardware. 

 

Technical Skills: 

  • Background in Linear Algebra, Probability and Statistics, graphical algorithms and optimization problems is necessary. 
  • Solid theoretical background in 3D computer vision, computational geometry, SLAM and robot perception is desired. Deep learning background is optional. 
  • Knowledge of some numerical algorithms or libraries among: Bayesian filters, SLAM, Eigen, Boost, g2o, PCL, Open3D, ICP. 
  • Experience in two-view and multi-view geometry (see the sketch after this list). 
  • Necessary Skills: Python, C++, Boost, Computer Vision, Robotics, OpenCV. 
  • Academic experience for freshers in Vision for Robotics is preferred.  
  • Experienced candidates in Robotics with no prior Deep Learning experience willing to apply their knowledge to vision problems are also encouraged to apply. 
  • Software development experience on low-power embedded platforms is a plus. 
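
As an illustration of the two-view geometry skills above, here is a minimal OpenCV sketch that estimates relative camera pose from matched features; the image paths and camera intrinsics are placeholders, not Monarch's actual pipeline.

```python
# Hypothetical two-view geometry sketch: relative pose from ORB matches (OpenCV).
import numpy as np
import cv2

# Assumed inputs: two overlapping grayscale images and a known camera matrix K.
img1 = cv2.imread("frame_0.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_1.png", cv2.IMREAD_GRAYSCALE)
K = np.array([[700.0, 0.0, 640.0],
              [0.0, 700.0, 360.0],
              [0.0, 0.0, 1.0]])

# Detect and match ORB features between the two views.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Estimate the essential matrix with RANSAC and recover relative rotation/translation.
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
print("R =\n", R, "\nt =\n", t)
```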

 

Responsibilities: 

  • Strong grasp of engineering principles and a clear understanding of data structures and algorithms. 
  • Ability to understand, optimize and debug imaging algorithms. 
  • Ability to drive a project from conception to completion, research papers to code with disciplined approach to software development on Linux platform. 
  • Demonstrate outstanding ability to perform innovative and significant research in the form of technical papers, thesis, or patents. 
  • Optimize runtime performance of designed models. 
  • Deploy models to production and monitor performance and debug inaccuracies and exceptions. 
  • Communicate and collaborate with team members in India and abroad for the fulfillment of your duties and organizational objectives. 
  • Thrive in a fast-paced environment and can own the project end to end with minimum hand holding. 
  • Learn & adapt to new technologies & skillsets. 
  • Work on projects independently with timely delivery & defect free approach. 
  • Thesis focusing on the above skill set may be given more preference. 

 

What you will get: 

At Monarch Tractor, you’ll play a key role on a capable, dedicated, high-performing team of rock stars. Our compensation package includes a competitive salary and excellent health benefits, commensurate with the role you’ll play in our success. 

 

Ogive Technology

Posted by Ogive Technology
Hyderabad
0 - 2 yrs
₹4L - ₹6L / yr
Machine Learning (ML)
Data Science
Natural Language Processing (NLP)
Computer Vision
recommendation algorithm

We are hiring a Software Engineer: minimum 1 year of experience, engineering graduate from the Mechanical/EEE/EC/CS stream.

- The primary role will be helping our customers with development requirements in image processing.
- It will also involve developing technical specifications and product descriptions in the image processing field.
You will get to work on new and disruptive technologies.


Key Skills:
* Familiarity with basic Linux commands
* Hands-on experience in image processing application development based on deep neural networks, OpenCV, etc. (see the sketch below)
* Experience working with Python, R, TensorFlow, and C/C++
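
A minimal sketch of the DNN-based image processing mentioned in the key skills, assuming TensorFlow/Keras and OpenCV; the pretrained model and image path are illustrative only.

```python
# Hypothetical sketch: classify an image with a pretrained Keras model after OpenCV preprocessing.
import cv2
import numpy as np
import tensorflow as tf

# Load and preprocess an image (path is a placeholder).
img_bgr = cv2.imread("sample.jpg")
img_rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)
img_resized = cv2.resize(img_rgb, (224, 224))
batch = tf.keras.applications.mobilenet_v2.preprocess_input(
    np.expand_dims(img_resized.astype("float32"), axis=0)
)

# Run a pretrained MobileNetV2 classifier and print the top predictions.
model = tf.keras.applications.MobileNetV2(weights="imagenet")
preds = model.predict(batch)
for _, label, score in tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=3)[0]:
    print(f"{label}: {score:.3f}")
```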
Job Location: Hyderabad

Resumes to be sent to Ogive mail id
Technovert

Posted by Dushyant Waghmare
Hyderabad
5 - 8 yrs
₹12.5L - ₹24L / yr
ETL
Informatica
Data Warehouse (DWH)

Role: ODI Developer

Location: Hyderabad (Initially remote)

Experience: 5-8 Years

 

Technovert is not your typical IT services firm. We have to our credit two successful products generating $2M+ in licensing/SaaS revenues, which is rare in the industry.

We are obsessed with our love for technology and the infinite possibilities it can create for making this world a better place. Our clients find us at our best when we are challenged with their toughest problems, and we love chasing those problems. It thrills us and motivates us to deliver more. Our global delivery model has earned us the trust and reputation of being a partner of choice.

We have a strong heritage built on great people who put customers first and deliver exceptional results with no surprises, every time. We partner with you to understand the interconnection of user experience, business goals, and information technology. It's the optimal fusing of these three drivers that delivers.

 

Must have:

  • Experience with DWH implementation and with developing ETL processes: ETL control tables, error logging, auditing, data quality, etc. (see the sketch after this list).
  • Responsible for creating ELT maps, migrating objects into different environments, maintaining and monitoring the infrastructure, and working with DBAs, as well as creating new reports that help executives and managers analyze business needs and target customers.
  • Should be able to implement reusability, parameterization, workflow design, etc.
  • Expertise in the Oracle ODI toolset and OAC; knowledge of ODI master and work repositories, data modeling, and ETL design.
  • Experience using ODI Topology Manager to create connections to various technologies such as Oracle, SQL Server, flat files, XML, etc.
  • Experience with ODI mappings, error handling, automation using ODI, load plans, and migration of objects.
  • Ability to design ETL unit test cases and debug ETL mappings; expertise in developing load plans and scheduling jobs.
  • Ability to integrate ODI with multiple sources/targets.
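
ODI itself is configured through its graphical toolset, but the control-table and audit-logging pattern in the first bullet can be sketched generically. The table and column names below are illustrative, and `conn` stands for any DB-API connection (e.g., python-oracledb against a staging schema); the bind-variable syntax is Oracle-style.

```python
# Hypothetical sketch of an ETL control/audit table pattern (names are illustrative).
CREATE_CONTROL_TABLE = """
CREATE TABLE etl_batch_control (
    batch_id      NUMBER        NOT NULL,
    mapping_name  VARCHAR2(100) NOT NULL,
    started_at    TIMESTAMP     NOT NULL,
    finished_at   TIMESTAMP,
    status        VARCHAR2(20)  NOT NULL,   -- RUNNING / SUCCESS / FAILED
    rows_loaded   NUMBER,
    error_message VARCHAR2(4000)
)
"""

def run_with_audit(conn, batch_id, mapping_name, load_fn):
    """Record start/end, status, and row counts for one mapping run."""
    cur = conn.cursor()
    cur.execute(
        "INSERT INTO etl_batch_control (batch_id, mapping_name, started_at, status) "
        "VALUES (:1, :2, SYSTIMESTAMP, 'RUNNING')",
        [batch_id, mapping_name],
    )
    try:
        rows = load_fn()  # the actual ELT step (e.g., an ODI scenario or a SQL MERGE)
        cur.execute(
            "UPDATE etl_batch_control SET status='SUCCESS', rows_loaded=:1, "
            "finished_at=SYSTIMESTAMP WHERE batch_id=:2 AND mapping_name=:3",
            [rows, batch_id, mapping_name],
        )
    except Exception as exc:
        cur.execute(
            "UPDATE etl_batch_control SET status='FAILED', error_message=:1, "
            "finished_at=SYSTIMESTAMP WHERE batch_id=:2 AND mapping_name=:3",
            [str(exc)[:4000], batch_id, mapping_name],
        )
        raise
    finally:
        conn.commit()
```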

 

Nice to have:

  • Exposure towards Oracle Cloud Infrastructure (OCI) is preferable.
  • Knowledge in Oracle Analytics Cloud to Explore data through visualizations, load, and model data.
  • Hands-on experience of ODI 12c would be an added advantage.

 

Qualification:

  • Overall 3+ years of experience in Oracle Data Integrator (ODI) and Oracle Data Integrator Cloud Service (ODICS).
  • Experience in designing and implementing the E-LT architecture that is required to build a data warehouse, including source-to-staging area, staging-to-target area, data transformations, and EL-T process flows.
  • Must be well versed and hands-on in using and customizing Knowledge Modules (KM) and experience of performance tuning of mappings.
  • Must be self-starting, have strong attention to detail and accuracy, and able to fill multiple roles within the Oracle environment.
  • Should be good with Oracle/SQL and should have a good understanding of DDL Deployments.

 

Product Development


Agency job
via Purple Hirez by Aditya K
Hyderabad
12 - 20 yrs
₹15L - ₹50L / yr
Analytics
Data Analytics
Kubernetes
PySpark
Python
+1 more

Job Description

We are looking for an experienced engineer with superb technical skills, who will primarily be responsible for architecting and building large-scale data pipelines that deliver AI and analytical solutions to our customers. The right candidate will enthusiastically take ownership of developing and managing continuously improving, robust, scalable software solutions.

Although your primary responsibilities will be around back-end work, we prize individuals who are willing to step in and contribute to other areas, including automation, tooling, and management applications. Experience with, or a desire to learn, Machine Learning is a plus.

 

Skills

  • Bachelor's/Master's/PhD in CS or equivalent industry experience
  • Demonstrated expertise in building and shipping cloud-native applications
  • 5+ years of industry experience administering (including setting up, managing, and monitoring) data processing pipelines, both streaming and batch, using frameworks such as Kafka Streams and PySpark, and streaming databases like Druid or equivalents such as Hive (see the sketch after this list)
  • Strong industry expertise with containerization technologies, including Kubernetes (EKS/AKS) and Kubeflow
  • Experience with cloud platform services such as AWS, Azure, or GCP, especially EKS and managed Kafka
  • 5+ years of industry experience in Python
  • Experience with popular modern web frameworks such as Spring Boot, Play Framework, or Django
  • Experience with scripting languages; Python experience highly desirable. Experience in API development using Swagger
  • Experience implementing automated testing platforms and unit tests
  • Proficient understanding of code versioning tools, such as Git
  • Familiarity with continuous integration (e.g., Jenkins)
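
A minimal sketch of the streaming-pipeline skills listed above, assuming PySpark Structured Streaming with the Kafka connector on the classpath; the broker, topic, schema, and paths are placeholders, and Druid ingestion would typically happen downstream of the files written here.

```python
# Hypothetical PySpark Structured Streaming sketch: Kafka -> windowed aggregate -> Parquet.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("events_stream").getOrCreate()

event_schema = StructType([
    StructField("event_time", TimestampType()),
    StructField("user_id", StringType()),
    StructField("metric", DoubleType()),
])

# Read raw events from Kafka (bootstrap servers and topic are placeholders).
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")
    .option("subscribe", "events")
    .load()
)

# Parse the JSON payload and aggregate per 1-minute window with a watermark.
events = raw.select(F.from_json(F.col("value").cast("string"), event_schema).alias("e")).select("e.*")
agg = (
    events.withWatermark("event_time", "5 minutes")
    .groupBy(F.window("event_time", "1 minute"), "user_id")
    .agg(F.sum("metric").alias("metric_sum"))
)

# Write the aggregates out; a Druid ingestion spec could pick these files up downstream.
query = (
    agg.writeStream.outputMode("append")
    .format("parquet")
    .option("path", "s3a://stream-output/metrics/")
    .option("checkpointLocation", "s3a://stream-output/checkpoints/")
    .start()
)
query.awaitTermination()
```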

Responsibilities

  • Architect, Design and Implement Large scale data processing pipelines using Kafka Streams, PySpark, Fluentd and Druid
  • Create custom Operators for Kubernetes, Kubeflow
  • Develop data ingestion processes and ETLs
  • Assist in dev ops operations
  • Design and Implement APIs
  • Identify performance bottlenecks and bugs, and devise solutions to these problems
  • Help maintain code quality, organization, and documentation
  • Communicate with stakeholders regarding various aspects of the solution.
  • Mentor team members on best practices
RandomTrees

Posted by HariPrasad Jonnalagadda
Remote, Hyderabad, Bengaluru (Bangalore)
9 - 20 yrs
₹10L - ₹15L / yr
Natural Language Processing (NLP)
Data Science
Machine Learning (ML)
Computer Vision
recommendation algorithm

Expert in Machine Learning (ML) & Natural Language Processing (NLP).

Expert in Python, PyTorch, and data structures.

Experience in the ML model life cycle (data preparation, model training and testing, and MLOps).

Strong experience in NLP and NLU using transformers & deep learning (see the sketch at the end of this listing).

Experience in federated learning is a plus

Experience with knowledge graphs and ontology.

Responsible for developing, enhancing, modifying, optimizing and/or maintaining applications, pipelines and codebase in order to enhance the overall solution.

Experience working with scalable, highly-interactive, high-performance systems/projects (ML).

Design, code, test, debug and document programs as well as support activities for the corporate systems architecture.

Working closely with business partners in defining requirements for ML applications and advancements of solution.

Engage in specifications in creating comprehensive technical documents.

Experience / Knowledge in designing enterprise grade system architecture for solving complex problems with a sound understanding of object-oriented programming and Design Patterns.

Experience in Test Driven Development & Agile methodologies.

Good communication skills - client facing environment.

Hunger for learning; a self-starter with a drive to technically mentor a cohort of developers.

Good to have: working experience in developing Knowledge Graph-based ML products, and AWS/GCP-based ML services.
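
As an illustration of the transformer-based NLP work described above, here is a minimal PyTorch / Hugging Face sketch; the checkpoint name and example texts are placeholders, not the team's actual stack.

```python
# Hypothetical sketch: sentence classification with a pretrained transformer (PyTorch + Hugging Face).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "distilbert-base-uncased-finetuned-sst-2-english"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

texts = ["The onboarding flow was smooth.", "The model keeps timing out."]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

# Run inference without gradients and map logits to labels.
with torch.no_grad():
    logits = model(**batch).logits
probs = torch.softmax(logits, dim=-1)

for text, p in zip(texts, probs):
    label = model.config.id2label[int(p.argmax())]
    print(f"{label} ({p.max().item():.2f}): {text}")
```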

Cervello


Agency job
via StackNexus by suman kattella
Hyderabad
5 - 7 yrs
₹5L - ₹15L / yr
Data engineering
Data modeling
Data Warehouse (DWH)
SQL
Windows Azure
+3 more
Contract job - long-term, for 1 year

Client - Cervello
Job Role - Data Engineer
Location - Remote till COVID (Hyderabad StackNexus office post-COVID)
Experience - 5 - 7 years
Skills Required - Should have hands-on experience in Azure data modelling, Python, SQL, and Azure Databricks.
Notice period - Immediate to 15 days
Indium Software

Posted by Mohamed Aslam
Hyderabad
3 - 7 yrs
₹7L - ₹13L / yr
Python
Spark
SQL
PySpark
HiveQL
+2 more

Indium Software is a niche technology solutions company with deep expertise in Digital, QA, and Gaming. Indium helps customers in their Digital Transformation journey through a gamut of solutions that enhance business value.

With 1,000+ associates globally, Indium operates through offices in the US, UK, and India.

Visit www.indiumsoftware.com to know more.

Job Title: Analytics Data Engineer

What will you do:
The Data Engineer must be an expert in SQL development, further providing support to the Data and Analytics team in database design, data flow, and analysis activities. The Data Engineer also plays a key role in the development and deployment of innovative big data platforms for advanced analytics and data processing, and defines and builds the data pipelines that will enable faster, better, data-informed decision-making within the business.

We ask:

Extensive experience with SQL and a strong ability to process and analyse complex data.

The candidate should also have the ability to design, build, and maintain the business’s ETL pipelines and data warehouse, and will demonstrate expertise in data modelling and query performance tuning on SQL Server.
Proficiency in analytics, especially funnel analysis, and experience with analytical tools like Mixpanel, Amplitude, ThoughtSpot, Google Analytics, and similar tools.

Should work on the tools and frameworks required for building efficient and scalable data pipelines.
Excellent at communicating and articulating ideas, with an ability to influence others as well as continuously drive towards a better solution.
Experience working in Python, Hive queries, Spark, PySpark, Spark SQL, and Presto (see the sketch below).
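
A minimal sketch of the kind of funnel query mentioned above, written with Spark SQL inside PySpark; the events table, event names, and path are hypothetical.

```python
# Hypothetical funnel-analysis sketch with Spark SQL (event names and schema are illustrative).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("funnel").getOrCreate()

# Assumed events table: (user_id STRING, event STRING, event_time TIMESTAMP)
spark.read.parquet("s3a://analytics/events/").createOrReplaceTempView("events")

funnel = spark.sql("""
    SELECT
        COUNT(DISTINCT CASE WHEN event = 'visit'    THEN user_id END) AS visited,
        COUNT(DISTINCT CASE WHEN event = 'signup'   THEN user_id END) AS signed_up,
        COUNT(DISTINCT CASE WHEN event = 'purchase' THEN user_id END) AS purchased
    FROM events
    WHERE event_time >= date_sub(current_date(), 30)
""")
funnel.show()
```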

  • Relate Metrics to product
  • Programmatic Thinking
  • Edge cases
  • Good Communication
  • Product functionality understanding

Perks & Benefits:
A dynamic, creative & intelligent team that will make you love being at work.
An autonomous and hands-on role to make an impact; you will be joining at an exciting time of growth!

Flexible work hours, an attractive pay package, and perks.
An inclusive work environment that lets you work in the way that works best for you!

Agency job
via UpgradeHR by Sangita Deka
Hyderabad
6 - 10 yrs
₹10L - ₹15L / yr
Big Data
Data Science
Machine Learning (ML)
R Programming
Python
+2 more
It is one of the largest communication technology companies in the world. They operate America's largest 4G LTE wireless network and the nation's premier all-fiber broadband network.
GEP Worldwide

Posted by Archy Singh
Navi Mumbai, Hyderabad
3 - 7 yrs
₹2L - ₹5L / yr
R Programming
Python
Data Science
Primary Skills:
- B.Tech/MS/PhD degree in Computer Science, Computer Engineering, or a related technical discipline, with 3-4 years of industry experience in Data Science.
- Proven experience working on unstructured and textual data; deep understanding and expertise in NLP techniques (POS tagging, NER, semantic role labelling, etc.).
- Experience working with supervised/unsupervised ML models such as linear/logistic regression, clustering, support vector machines (SVM), neural networks, Random Forest, CRF, Bayesian models, etc. The ideal candidate will have wide coverage of the different methods/models and in-depth knowledge of some (see the sketch below).
- Strong coding experience in Python, R, and Apache Spark. Python skills are mandatory.
- Experience with NoSQL databases such as MongoDB, Cassandra, HBase, etc.
- Experience working with Elasticsearch is a plus.
- Experience working on Microsoft Azure is a plus, although not mandatory.
- Basic knowledge of Linux and related scripting (Bash/shell scripts).

Role Description (Roles & Responsibilities):
- The candidate will research, design, and implement state-of-the-art ML systems using predictive modelling, deep learning, natural language processing, and other ML techniques to help meet business objectives.
- The candidate will work closely with the product development/engineering team to develop solutions for complex business problems or product features.
- Handle Big Data scale for training and deploying ML/NLP-based business modules/chatbots.
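
A minimal scikit-learn sketch of the supervised text-modelling skills listed above (TF-IDF features plus logistic regression); the toy data and labels are illustrative only.

```python
# Hypothetical sketch: TF-IDF + logistic regression on a toy text dataset (scikit-learn).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled data (placeholder for real unstructured/textual sources).
texts = [
    "invoice overdue payment required",
    "quarterly savings exceeded target",
    "contract renewal pending with supplier",
    "spend analysis shows cost reduction",
]
labels = ["risk", "opportunity", "risk", "opportunity"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
clf.fit(texts, labels)

print(clf.predict(["supplier payment is overdue"]))  # expected: ['risk']
```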