Joint application design jobs

11+ Joint application design Jobs in India

Apply to 11+ Joint application design Jobs on CutShort.io. Find your next job, effortlessly. Browse Joint application design Jobs and apply today!

Namaste Credit


9 recruiters
Posted by Abhishek J Shiri
Remote, Bengaluru (Bangalore)
3 - 6 yrs
₹5L - ₹10L / yr
analyst
analysis
DA
SQL
Joint application design
+3 more

About the Role

The following Job Specification is intended to reflect the nature, range and context of the work. It identifies the main requirements of the job but is not an exhaustive list of duties.

 

The Data Analyst is a vital role: the analyst is the sole point of contact responsible for working closely with various internal teams and with the CTO. The role requires analysing a large number of datasets and presenting meaningful findings, tied to the company's growth, to management. Because the role involves close communication with other departments, it suits someone who genuinely enjoys this kind of work.

 

Job Description

  • Interpret data, analyze results using statistical techniques and provide ongoing reports
  • Develop and implement data analytics and other strategies that optimize statistical efficiency and quality
  • Work extensively with databases, applying strong analytical reasoning
  • Acquire data from Finance, Technology, Marketing and Sales data sources and maintain data systems
  • Identify, analyze, and interpret trends or patterns in complex data sets
  • Highlight incorrect data and record recurring defect patterns to guide the cleansing of data modules
  • Reconcile data sets and components across verticals as needed
  • Filter and “clean” data by reviewing computer reports, printouts, and performance indicators to locate and correct code problems (a minimal sketch follows this list)
  • Work with the management to prioritize business and information needs
  • Locate and define new process improvement opportunities
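By way of illustration only, a minimal pandas sketch of the flagging and cleansing duties above; the file and column names (applications.csv, loan_amount, applicant_age, segment) are hypothetical, not taken from the posting:

import pandas as pd

# Hypothetical extract of loan applications.
df = pd.read_csv("applications.csv")

# Flag records with impossible values so the pattern of bad data can be reported.
bad_rows = df[(df["loan_amount"] <= 0) | (df["applicant_age"] < 18)]
print(f"{len(bad_rows)} suspect records out of {len(df)}")

# Basic cleansing: drop exact duplicates and standardize a categorical field.
clean = df.drop_duplicates()
clean["segment"] = clean["segment"].str.strip().str.lower()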

 

 

Required Skills:

 

  • Proven working experience as a data analyst or business data analyst
  • Sound experience with MySQL, Postgres and Oracle databases; fluent in SQL scripting
  • Technical expertise regarding data models, database design and development, data mining and segmentation techniques
  • Strong knowledge of and experience with reporting packages (Business Objects etc), databases (SQL etc) and programming (XML, JavaScript, or ETL frameworks)
  • Knowledge of statistics and experience using statistical packages for analyzing datasets (Excel, SPSS, SAS etc)
  • Strong analytical skills with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy
  • Adept at queries, report writing and presenting findings
  • BS in Mathematics, Economics, Computer Science, Information Management or Statistics

 

NeoSoft Technologies (A CMMi Level 5 Organization)
Mumbai, Navi Mumbai
3 - 6 yrs
₹6L - ₹12L / yr
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
recommendation algorithm
+3 more
  1. Minimum 2.5 years of experience as a Python Developer.
  2. Minimum 2.5 years of experience in a framework like Django/Flask/FastAPI (a minimal FastAPI sketch follows this list).
  3. Minimum 2.5 years of experience in SQL/Postgres.
  4. Minimum 2.5 years of experience with Git/GitLab/Bitbucket.
  5. Minimum 2+ years of experience in deployment (CI/CD with Jenkins).
  6. Minimum 2.5 years of experience with a cloud platform like AWS/GCP/Azure.
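As a rough yardstick for the framework experience above, a minimal FastAPI sketch; the endpoints and the Item model are illustrative only, not part of the posting:

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

@app.get("/health")
def health():
    # Simple liveness endpoint.
    return {"status": "ok"}

@app.post("/items")
def create_item(item: Item):
    # FastAPI validates the JSON body against the Item model before this runs.
    return {"created": item.name, "price": item.price}

Served with, for example, uvicorn main:app (assuming the file is named main.py).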
Data Semantics
Posted by Deepu Vijayan
Remote, Hyderabad, Bengaluru (Bangalore)
4 - 15 yrs
₹3L - ₹30L / yr
ETL
Informatica
Data Warehouse (DWH)
SQL Server Analysis Services (SSAS)
SQL Server Reporting Services (SSRS)
+4 more

This is a permanent opening with Data Semantics.

Data Semantics


We are a product-based company and a Microsoft Gold Partner.

Data Semantics is an award-winning Data Science company with a vision to empower every organization to harness the full potential of its data assets. To achieve this, we provide Artificial Intelligence, Big Data and Data Warehousing solutions to enterprises across the globe. Data Semantics was listed among the top 20 Analytics companies by Silicon India (2018) and among the top 20 BI companies by CIO Review India (2014). We are headquartered in Bangalore, India, with offices in six global locations including the USA, the United Kingdom, Canada, the United Arab Emirates (Dubai and Abu Dhabi), and Mumbai. Our mission is to enable our people to learn the art of data management and visualization to help our customers make quick and smart decisions.

 

Our Services include: 

Business Intelligence & Visualization

App and Data Modernization

Low Code Application Development

Artificial Intelligence

Internet of Things

Data Warehouse Modernization

Robotic Process Automation

Advanced Analytics

 

Our Products:

Sirius – World’s most agile conversational AI platform

Serina

Conversational Analytics

Contactless Attendance Management System

 

 

Company URL: https://datasemantics.co


JD:

MSBI

SSAS

SSRS

SSIS

Data Warehousing

SQL

vThink Global Technologies
Posted by Balasubramanian Ramaiyar
Chennai
4 - 7 yrs
₹8L - ₹15L / yr
SQL
ETL
Informatica
Data Warehouse (DWH)
Stored Procedures
+1 more
We are looking for a strong SQL Developer, well versed and hands-on in SQL, stored procedures, joins and ETL: a data-savvy individual with advanced SQL skills.
Fragma Data Systems


8 recruiters
Posted by Minakshi Kumari
Remote only
1 - 5 yrs
₹10L - ₹15L / yr
SQL
PySpark
Responsible for developing and maintaining applications with PySpark
• Contribute to the overall design and architecture of the application developed and deployed.
• Performance tuning with respect to executor sizing and other environment parameters, code optimization, partition tuning, etc.
• Interact with business users to understand requirements and troubleshoot issues.
• Implement projects based on functional specifications.

Must Have Skills:
• Good experience in PySpark, including DataFrame core functions and Spark SQL (a minimal sketch follows this list)
• Good experience with SQL databases; able to write queries of fair complexity.
• Excellent experience in Big Data programming for data transformation and aggregation
• Good grasp of ELT architecture: business-rule processing and data extraction from a data lake into data streams for business consumption.
• Good customer communication.
• Good analytical skills
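To give a feel for the PySpark work described above, a minimal sketch combining the DataFrame API and Spark SQL; the input path, column names and tuning values are hypothetical:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Executor sizing and shuffle partitions are illustrative only; real values
# depend on cluster capacity and data volume.
spark = (
    SparkSession.builder.appName("sales-aggregation")
    .config("spark.executor.memory", "4g")
    .config("spark.executor.cores", "4")
    .config("spark.sql.shuffle.partitions", "200")
    .getOrCreate()
)

# Hypothetical input location and schema.
orders = spark.read.parquet("s3://example-bucket/orders/")

# DataFrame API: revenue per region.
by_region = orders.groupBy("region").agg(F.sum("amount").alias("revenue"))

# The same aggregation via Spark SQL over a temporary view.
orders.createOrReplaceTempView("orders")
spark.sql("SELECT region, SUM(amount) AS revenue FROM orders GROUP BY region").show()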
Astegic


3 recruiters
Posted by Nikita Pasricha
Remote only
5 - 7 yrs
₹8L - ₹15L / yr
Data engineering
SQL
Relational Database (RDBMS)
Big Data
Scala
+14 more

WHAT YOU WILL DO:

  • Create and maintain optimal data pipeline architecture.
  • Assemble large, complex data sets that meet functional/non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using Spark, Hadoop and AWS 'big data' technologies (EC2, EMR, S3, Athena); a minimal sketch follows this list.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
  • Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
  • Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
  • Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
  • Work with data and analytics experts to strive for greater functionality in our data systems.
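As a small illustration of the AWS side of this work, a sketch that submits a query to Athena with boto3; the database, table and S3 bucket names are invented for the example:

import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Submit a query; Athena runs it asynchronously and writes results to S3.
response = athena.start_query_execution(
    QueryString="SELECT event_date, COUNT(*) AS events FROM web_logs GROUP BY event_date",
    QueryExecutionContext={"Database": "analytics_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print("Query started:", response["QueryExecutionId"])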

REQUIRED SKILLS & QUALIFICATIONS:

  • 5+ years of experience in a Data Engineer role.
  • Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases.
  • Experience building and optimizing 'big data' data pipelines, architectures and data sets.
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong analytic skills related to working with unstructured datasets.
  • Build processes supporting data transformation, data structures, metadata, dependency and workload management.
  • A successful history of manipulating, processing and extracting value from large disconnected datasets.
  • Working knowledge of message queuing, stream processing, and highly scalable 'big data' data stores.
  • Strong project management and organizational skills.
  • Experience supporting and working with cross-functional teams in a dynamic environment.
  • Experience with big data tools: Hadoop, Spark, Pig, Vertica, etc.
  • Experience with AWS cloud services: EC2, EMR, S3, Athena.
  • Experience with Linux.
  • Experience with object-oriented/object function scripting languages: Python, Java, Shell, Scala, etc.

PREFERRED SKILLS & QUALIFICATIONS:

  • Graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field.

Analytics Consulting Company | REMOTE
Remote, Bengaluru (Bangalore)
2 - 4 yrs
₹18L - ₹20L / yr
Data Science
R Programming
Python
MongoDB
SQL
+3 more
Do you want your software skills to contribute meaningfully to technology-driven solutions for a range of businesses while you grow your career? Then read on.
 
Our client provides data-based process optimization and analytics solutions to businesses worldwide. Their innovative algorithms and customized IT solutions address complex problems in virtually every field or industry, through non-standard tools backed by extensive research. They serve startups as well as small, medium and large enterprises, and a majority of their clients are industry leaders.
 
With registered offices in India, the USA and the UAE, their projects support sectors and functions such as logistics, IT, retail, ecommerce and healthcare, across Asia, America and Europe. The founder holds a Master’s degree from IIT and a PhD in Operations Research from the USA, with rich experience in optimization and analytics for various industries. His team of top scientists and pedagogy experts focuses on innovative revenue-generation ideas with minimum operational costs.
 
As a Data Scientist, you will apply expertise in machine learning, data mining and statistical methods to design, prototype, and build the next-generation analytics engines and services.
 
What you will do:
  • Conducting advanced statistical analysis to provide actionable insights, identify trends, and measure performance
  • Performing data exploration, cleaning, preparation and feature engineering, in addition to tasks such as building a POC and validation/A/B testing (a minimal sketch of such a test follows this list)
  • Collaborating with data engineers & architects to implement and deploy scalable solutions
  • Communicating results to diverse audiences with effective writing and visualizations
  • Identifying and executing high-impact projects, triaging external requests, and ensuring timely completion so that the results remain useful
  • Providing thought leadership by researching best practices, conducting experiments, and collaborating with industry leaders
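By way of illustration of the validation/A/B-testing task above, a minimal sketch using Welch's t-test from SciPy; the metric values are invented for the example:

import numpy as np
from scipy import stats

# Hypothetical per-cohort conversion rates from an A/B test.
control = np.array([0.12, 0.10, 0.14, 0.11, 0.13])
variant = np.array([0.15, 0.13, 0.16, 0.14, 0.17])

# Welch's t-test (unequal variances): is the lift statistically significant?
t_stat, p_value = stats.ttest_ind(variant, control, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")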

 

 

What you need to have:
  • 2-4 years' experience with machine learning algorithms, predictive analytics and demand forecasting in real-world projects
  • Strong statistical background in descriptive and inferential statistics, regression and forecasting techniques
  • Strong programming background in Python (including packages like TensorFlow), R, D3.js, Tableau, Spark, SQL and MongoDB
  • Preferred: exposure to optimization & meta-heuristic algorithms and related applications
  • Background in a highly quantitative field like Data Science, Computer Science, Statistics, Applied Mathematics, Operations Research, Industrial Engineering, or similar fields
  • 2-4 years of experience in Data Science algorithm design and implementation, and data analysis in different applied problems
  • DS mandatory skills: Python, R, SQL, deep learning, predictive analysis, applied statistics
Nascentvision


1 recruiter
Posted by Shanu Mohan
Gurugram, Mumbai, Bengaluru (Bangalore)
2 - 4 yrs
₹10L - ₹17L / yr
Python
PySpark
Amazon Web Services (AWS)
Spark
Scala
+2 more
  • Hands-on experience in any cloud platform
  • Versed in Spark, Scala/Python, SQL
  • Microsoft Azure experience
  • Experience working on a real-time data processing pipeline (a minimal streaming sketch follows this list)
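For the real-time item above, a minimal PySpark Structured Streaming sketch; the built-in rate source stands in for a real feed such as Kafka or Event Hubs:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("streaming-demo").getOrCreate()

# The "rate" source emits timestamped rows at a fixed pace; it substitutes
# for a real source such as Kafka or Kinesis in this sketch.
stream = spark.readStream.format("rate").option("rowsPerSecond", 10).load()

# Print each micro-batch to the console; a real pipeline would sink to storage.
query = stream.writeStream.outputMode("append").format("console").start()
query.awaitTermination()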
Mumbai
5 - 7 yrs
₹20L - ₹25L / yr
ETL
Talend
OLAP
Data governance
SQL
+8 more
  • Key responsibility is to design and develop a data pipeline, including the architecture, prototyping, and development of data extraction, transformation/processing, cleansing/standardizing, and loading into a Data Warehouse at real-time/near-real-time frequency. Source data can be in structured, semi-structured, and/or unstructured format.
  • Provide technical expertise to design efficient data ingestion solutions to consolidate data from RDBMSs, APIs, messaging queues, weblogs, images, audio, documents, etc. of enterprise applications, SaaS applications, and external third-party sites or APIs, through ETL/ELT, API integrations, Change Data Capture, Robotic Process Automation, custom Python/Java coding, etc.
  • Development of complex data transformations using Talend (Big Data edition), Python/Java transformations in Talend, SQL/Python/Java UDXs, AWS S3, etc., to load into an OLAP Data Warehouse in structured/semi-structured form
  • Development of data models and creation of transformation logic to populate the models for faster data consumption with simple SQL
  • Implementation of automated audit & quality-assurance checks in the data pipeline
  • Documenting & maintaining data lineage to enable data governance
  • Coordination with BIU, IT, and other stakeholders to provide best-in-class data pipeline solutions, exposing data via APIs, loading into downstream systems, NoSQL databases, etc.

Requirements

  • Programming experience using Python/Java to create functions/UDXs
  • Extensive technical experience with SQL on RDBMS (Oracle/MySQL/Postgresql etc) including code optimization techniques
  • Strong ETL/ELT skillset using Talend BigData Edition. Experience in Talend CDC & MDM functionality will be an advantage.
  • Experience & expertise in implementing complex data pipelines, including semi-structured & unstructured data processing
  • Expertise to design efficient data ingestion solutions to consolidate data from RDBMS, APIs, Messaging queues, weblogs, images, audios, documents, etc of  Enterprise Applications, SAAS applications, external 3rd party sites or APIs, etc through ETL/ELT, API integrations, Change Data Capture, Robotic Process Automation, Custom Python/Java Coding, etc
  • Good understanding & working experience in OLAP Data Warehousing solutions (Redshift, Synapse, Snowflake, Teradata, Vertica, etc) and cloud-native Data Lake (S3, ADLS, BigQuery, etc) solutions
  • Familiarity with AWS tool stack for Storage & Processing. Able to recommend the right tools/solutions available to address a technical problem
  • Good knowledge of database performance tuning, troubleshooting, and query optimization
  • Good analytical skills with the ability to synthesize data to design and deliver meaningful information
  • Good knowledge of Design, Development & Performance tuning of 3NF/Flat/Hybrid Data Model
  • Know-how of any NoSQL DB (DynamoDB, MongoDB, CosmosDB, etc.) will be an advantage.
  • Ability to understand business functionality, processes, and flows
  • Good combination of technical and interpersonal skills with strong written and verbal communication; detail-oriented with the ability to work independently

Functional knowledge

  • Data Governance & Quality Assurance
  • Distributed computing
  • Linux
  • Data structures and algorithms
  • Unstructured Data Processing
DataMetica


1 video
7 recruiters
Posted by Nikita Aher
Pune
2 - 5 yrs
₹1L - ₹8L / yr
Google Cloud Platform (GCP)
Big Query
Workflow
Integration
SQL
Job Title/Designation: GCP Engineer - Big Query, Dataflow
Employment Type: Full Time, Permanent

Job Description:

Experience - 2 to 5 Years
Work Location - Pune

Mandatory Skills:
 
  • Sound understanding of Google Cloud Platform
  • Should have worked on BigQuery, Workflow or Composer (a minimal BigQuery sketch follows this list)
  • Experience with migrating to GCP and integration projects in large-scale environments
  • ETL technical design, development and support
  • Good SQL skills and Unix scripting
  • Programming experience with Python, Java or Spark would be desirable, but not essential
  • Good communication skills
  • Experience of SOA and services-based data solutions would be advantageous
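To indicate the flavour of the BigQuery work involved, a minimal sketch using the google-cloud-bigquery Python client; the project, dataset and table names are invented:

from google.cloud import bigquery

# Assumes application-default credentials are configured in the environment.
client = bigquery.Client()

query = """
    SELECT country, COUNT(*) AS orders
    FROM `example-project.sales.orders`
    GROUP BY country
    ORDER BY orders DESC
    LIMIT 10
"""

# client.query() submits the job; .result() blocks until rows are ready.
for row in client.query(query).result():
    print(row.country, row.orders)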
 
Infogain
Agency job
via Technogen India Pvt Ltd by Rahul Batta
Bengaluru (Bangalore), Pune, Noida, NCR (Delhi | Gurgaon | Noida)
7 - 10 yrs
₹20L - ₹25L / yr
Data engineering
Python
SQL
Spark
PySpark
+10 more
Sr. Data Engineer:

Core Skills – Data Engineering, Big Data, PySpark, Spark SQL and Python

Candidates with a prior Palantir Cloud Foundry or Clinical Trial Data Model background are preferred

Major accountabilities:

  • Responsible for data engineering, Foundry data pipeline creation, Foundry analysis & reporting, Slate application development, re-usable code development & management, and integrating internal or external systems with Foundry for high-quality data ingestion.
  • Good understanding of the Foundry platform landscape and its capabilities
  • Performs the data analysis required to troubleshoot data-related issues and assists in their resolution.
  • Defines company data assets (data models) and the PySpark/Spark SQL jobs that populate them.
  • Designs data integrations and the data quality framework.
  • Designs & implements integration with internal and external systems and the F1 AWS platform using Foundry Data Connector or Magritte Agent
  • Collaborates with data scientists, data analysts and technology teams to document and leverage their understanding of the Foundry integration with different data sources; actively participates in agile work practices
  • Coordinates with Quality Engineers to ensure that all quality controls, naming conventions & best practices are followed

Desired Candidate Profile:

  • Strong data engineering background
  • Experience with Clinical Data Model is preferred
  • Experience in
    • SQL Server, Postgres, Cassandra, Hadoop, and Spark for distributed data storage and parallel computing
    • Java and Groovy for our back-end applications and data integration tools
    • Python for data processing and analysis
    • Cloud infrastructure based on AWS EC2 and S3
  • 7+ years of IT experience, 2+ years’ experience with the Palantir Foundry Platform, 4+ years’ experience with Big Data platforms
  • 5+ years of Python and PySpark development experience
  • Strong troubleshooting and problem-solving skills
  • BTech or master's degree in computer science or a related technical field
  • Experience designing, building, and maintaining big data pipelines systems
  • Hands-on experience on Palantir Foundry Platform and Foundry custom Apps development
  • Able to design and implement data integration between Palantir Foundry and external Apps based on Foundry data connector framework
  • Hands-on in programming languages, primarily Python, R, Java and Unix shell scripts
  • Hands-on experience with AWS / Azure cloud platforms and stacks
  • Strong in API based architecture and concept, able to do quick PoC using API integration and development
  • Knowledge of machine learning and AI
  • Skill and comfort working in a rapidly changing environment with dynamic objectives and iteration with users.

 Demonstrated ability to continuously learn, work independently, and make decisions with minimal supervision
