
11+ ER/Studio Jobs in India

Apply to 11+ ER/Studio Jobs on CutShort.io. Find your next job, effortlessly. Browse ER/Studio Jobs and apply today!

A global provider of Business Process Management services


Agency job
via Jobdost by Saida Jabbar
Bengaluru (Bangalore)
4 - 10 yrs
₹15L - ₹22L / yr
SQL Azure
ADF
Business process management
Windows Azure
SQL
+12 more

Desired Competencies:

  • Expertise in Azure Data Factory V2
  • Expertise in other Azure components such as Data Lake Store, SQL Database and Databricks
  • Must have working knowledge of Spark programming (a brief illustrative sketch follows this list)
  • Good exposure to data projects dealing with data design and source-to-target documentation, including defining transformation rules
  • Strong knowledge of CI/CD processes
  • Experience in building Power BI reports
  • Understanding of components such as pipelines, activities, datasets and linked services
  • Exposure to dynamic configuration of pipelines using datasets and linked services
  • Experience in designing, developing and deploying pipelines to higher environments
  • Good knowledge of file formats for flexible usage and file location objects (SFTP, FTP, local, HDFS, ADLS, Blob, Amazon S3, etc.)
  • Strong knowledge of SQL queries
  • Must have worked in full life-cycle development from functional design to deployment
  • Working knowledge of Git and SVN
  • Good experience in establishing connections with heterogeneous sources such as Hadoop, Hive, Amazon, Azure, Salesforce, SAP, HANA, APIs and various databases
  • Working knowledge of different Azure resources such as Storage Account, Synapse, Azure SQL Server, Azure Databricks and Azure Purview
  • Any experience related to metadata management, data modelling and related tools (Erwin, ER/Studio or others) is preferred
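The Spark item above is the kind of source-to-target transformation rule a minimal PySpark sketch can illustrate; the ADLS paths and column names below are hypothetical and not taken from the posting.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("source-to-target-example").getOrCreate()

# Hypothetical raw-zone path and schema, for illustration only.
orders = spark.read.parquet("abfss://raw@examplelake.dfs.core.windows.net/orders/")

# Example transformation rules: standardise the date column and derive a total.
curated = (
    orders
    .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
    .withColumn("order_total", F.col("quantity") * F.col("unit_price"))
    .filter(F.col("order_total") > 0)
)

curated.write.mode("overwrite").parquet(
    "abfss://curated@examplelake.dfs.core.windows.net/orders/"
)
```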

 

Preferred Qualifications:

  • Bachelor's degree in Computer Science or Technology
  • Proven success in contributing to a team-oriented environment
  • Proven ability to work creatively and analytically in a problem-solving environment
  • Excellent communication (written and oral) and interpersonal skills

Qualifications

BE/BTECH

KEY RESPONSIBILITIES:

You will join a team designing and building a data warehouse covering both relational and dimensional models, developing reports, data marts and other extracts, and delivering these via SSIS, SSRS, SSAS and Power BI. The role plays a vital part in delivering a single version of the truth on the client’s data and in providing the MI & BI that enable both operational and strategic decision making.

You will be able to take responsibility for projects over the entire software lifecycle and work with minimum supervision. This would include technical analysis, design, development, and test support as well as managing the delivery to production.

The initial project being resourced is around the development and implementation of a Data Warehouse and associated MI/BI functions.

 

Principal Activities:

1.       Interpret written business requirements documents

2.       Specify (high-level design and tech spec), code, and write automated unit tests for new aspects of the MI/BI service (see the test sketch after this list).

3.       Write clear and concise supporting documentation for deliverable items.

4.       Become a member of the skilled development team willing to contribute and share experiences and learn as appropriate.

5.       Review and contribute to requirements documentation.

6.       Provide third line support for internally developed software.

7.       Create and maintain continuous deployment pipelines.

8.       Help maintain Development Team standards and principles.

9.       Contribute and share learning and experiences with the greater Development team.

10.   Work within the company’s approved processes, including design and service transition.

11.   Collaborate with other teams and departments across the firm.

12.   Be willing to travel to other offices when required.
13.   Comply with any reasonable instructions or regulations issued by the Company from time to time, including those set out in the terms of the dealing and other manuals, staff handbooks and all other group policies.
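As an illustration of the automated unit tests mentioned in item 2: the posting's stack is SSIS/SSRS/SSAS, so real tests may live in a T-SQL framework instead, but this minimal pytest sketch (with a hypothetical MI metric, not taken from the posting) shows the intended shape.

```python
# test_mi_metrics.py - minimal pytest sketch (hypothetical metric, for illustration only)
import pytest


def claims_ratio(claims_paid: float, premiums_earned: float) -> float:
    """Return claims paid as a fraction of premiums earned."""
    if premiums_earned <= 0:
        raise ValueError("premiums_earned must be positive")
    return claims_paid / premiums_earned


def test_claims_ratio_happy_path():
    assert claims_ratio(50.0, 200.0) == pytest.approx(0.25)


def test_claims_ratio_rejects_zero_premiums():
    with pytest.raises(ValueError):
        claims_ratio(10.0, 0.0)
```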


Location – Bangalore

 

For a leading manufacturing company

Agency job
Chennai
5 - 8 yrs
₹6L - ₹7L / yr
Relational Database (RDBMS)
NOSQL Databases
MySQL
MS SQLServer
SQL server
+13 more

Database Architect

5 - 6 Years

Good knowledge of relational and non-relational databases

Ability to write complex queries, identify problematic queries and provide solutions (see the sketch after these requirements)

Good hands-on experience with database tools

Experience with both SQL and NoSQL databases such as SQL Server, PostgreSQL, MongoDB, MariaDB, etc.

Experience in data model preparation, database structuring, etc.
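A minimal sketch of identifying a problematic query, assuming PostgreSQL and the psycopg2 driver; the connection details, tables and query are hypothetical.

```python
import psycopg2

# Hypothetical connection settings and schema, for illustration only.
conn = psycopg2.connect(host="localhost", dbname="factory", user="report", password="secret")

slow_query = """
    SELECT c.customer_id, SUM(o.amount)
    FROM orders o
    JOIN customers c ON c.customer_id = o.customer_id
    WHERE o.created_at >= NOW() - INTERVAL '90 days'
    GROUP BY c.customer_id;
"""

with conn, conn.cursor() as cur:
    # EXPLAIN ANALYZE prints the actual plan and timings, which makes missing
    # indexes or unexpected sequential scans visible.
    cur.execute("EXPLAIN ANALYZE " + slow_query)
    for (line,) in cur.fetchall():
        print(line)
```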

Matellio India Private Limited
Harshit Sharma
Posted by Harshit Sharma
Remote only
8 - 15 yrs
₹10L - ₹27L / yr
Machine Learning (ML)
Data Science
Natural Language Processing (NLP)
Computer Vision
Deep Learning
+7 more

Responsibilities include: 

  • Convert the machine learning models into application programming interfaces (APIs) so that other applications can use them (see the sketch after this list)
  • Build AI models from scratch and help different parts of the organization (such as product managers and stakeholders) understand the results they gain from the model
  • Build data ingestion and data transformation infrastructure
  • Automate infrastructure that the data science team uses
  • Perform statistical analysis and tune the results so that the organization can make better-informed decisions
  • Set up and manage AI development and product infrastructure
  • Be a good team player, as coordinating with others is a must
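A minimal sketch of wrapping a trained model in an API, as described in the first bullet; it assumes Flask and a scikit-learn model saved with joblib, and the model file and feature names are hypothetical.

```python
# serve_model.py - minimal model-serving sketch (hypothetical model file and features)
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("model.joblib")  # assumed to be a trained scikit-learn estimator


@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json(force=True)
    features = [[payload["feature_a"], payload["feature_b"]]]
    prediction = model.predict(features)[0]
    return jsonify({"prediction": float(prediction)})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```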
Product and Service based company

Agency job
via Jobdost by Sathish Kumar
Hyderabad, Ahmedabad
4 - 8 yrs
₹15L - ₹30L / yr
Amazon Web Services (AWS)
Apache
Snowflake schema
Python
Spark
+13 more

Job Description

 

Mandatory Requirements 

  • Experience in AWS Glue

  • Experience in Apache Parquet 

  • Proficient in AWS S3 and data lake 

  • Knowledge of Snowflake

  • Understanding of file-based ingestion best practices.

  • Scripting languages – Python and PySpark (see the sketch after this list)
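A minimal PySpark sketch of file-based ingestion into an S3 data lake (CSV landed in a raw bucket converted to partitioned Parquet); the bucket names, paths and columns are hypothetical, and a real AWS Glue job would typically use GlueContext and DynamicFrames rather than a bare SparkSession.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("csv-to-parquet-ingestion").getOrCreate()

# Hypothetical raw landing zone, for illustration only (s3a assumes hadoop-aws is configured).
raw = (
    spark.read
    .option("header", "true")
    .csv("s3a://example-raw-bucket/transactions/2023-09-01/")
)

# File-based ingestion practice sketched here: convert row-oriented CSV to
# columnar Parquet and partition by load date before it lands in the lake.
(
    raw.withColumn("load_date", F.lit("2023-09-01"))
    .write.mode("append")
    .partitionBy("load_date")
    .parquet("s3a://example-data-lake/transactions/")
)
```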

CORE RESPONSIBILITIES

  • Create and manage cloud resources in AWS 

  • Data ingestion from different data sources that expose data using different technologies, such as RDBMS, flat files, streams, and time-series data from various proprietary systems; implement data ingestion and processing with the help of big data technologies 

  • Data processing/transformation using various technologies such as Spark and Cloud Services. You will need to understand your part of business logic and implement it using the language supported by the base data platform 

  • Develop automated data quality checks to make sure the right data enters the platform and to verify the results of the calculations (see the sketch after this list) 

  • Develop an infrastructure to collect, transform, combine and publish/distribute customer data.

  • Define process improvement opportunities to optimize data collection, insights and displays.

  • Ensure data and results are accessible, scalable, efficient, accurate, complete and flexible 

  • Identify and interpret trends and patterns from complex data sets 

  • Construct a framework utilizing data visualization tools and techniques to present consolidated analytical and actionable results to relevant stakeholders. 

  • Key participant in regular Scrum ceremonies with the agile teams  

  • Proficient at developing queries, writing reports and presenting findings 

  • Mentor junior members and bring best industry practices.
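A minimal sketch of the automated data quality checks mentioned above, written in PySpark with hypothetical table and column names.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

# Hypothetical curated table, for illustration only.
loans = spark.read.parquet("s3a://example-data-lake/loans/")

checks = {
    "row_count_positive": loans.count() > 0,
    "no_null_loan_ids": loans.filter(F.col("loan_id").isNull()).count() == 0,
    "amounts_non_negative": loans.filter(F.col("amount") < 0).count() == 0,
}

failed = [name for name, passed in checks.items() if not passed]
if failed:
    raise ValueError(f"Data quality checks failed: {failed}")
print("All data quality checks passed")
```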

 

QUALIFICATIONS

  • 5-7+ years’ experience as a data engineer in consumer finance or an equivalent industry (consumer loans, collections, servicing, optional products, and insurance sales) 

  • Strong background in math, statistics, computer science, data science or related discipline

  • Advanced knowledge of at least one language: Java, Scala, Python, C# 

  • Production experience with: HDFS, YARN, Hive, Spark, Kafka, Oozie / Airflow, Amazon Web Services (AWS), Docker / Kubernetes, Snowflake  

  • Proficient with:
      • Data mining/programming tools (e.g. SAS, SQL, R, Python)
      • Database technologies (e.g. PostgreSQL, Redshift, Snowflake and Greenplum)
      • Data visualization tools (e.g. Tableau, Looker, MicroStrategy)

  • Comfortable learning about and deploying new technologies and tools. 

  • Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines. 

  • Good written and oral communication skills and ability to present results to non-technical audiences 

  • Knowledge of business intelligence and analytical tools, technologies and techniques.

Familiarity and experience in the following is a plus: 

  • AWS certification

  • Spark Streaming 

  • Kafka Streaming / Kafka Connect 

  • ELK Stack 

  • Cassandra / MongoDB 

  • CI/CD: Jenkins, GitLab, Jira, Confluence and other related tools

SmartHub Innovation Pvt Ltd
Sathya Venkatesh
Posted by Sathya Venkatesh
Bengaluru (Bangalore)
5 - 7 yrs
₹15L - ₹20L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+3 more

JD Code: SHI-LDE-01 

Version#: 1.0 

Date of JD Creation: 27-March-2023 

Position Title: Lead Data Engineer 

Reporting to: Technical Director 

Location: Bangalore Urban, India (on-site) 

 

SmartHub.ai (www.smarthub.ai) is a fast-growing startup headquartered in Palo Alto, CA, with offices in Seattle and Bangalore. We operate at the intersection of AI, IoT & Edge Computing. With strategic investments from leaders in infrastructure & data management, SmartHub.ai is redefining the Edge IoT space. Our “Software Defined Edge” products help enterprises rapidly accelerate their Edge Infrastructure Management & Intelligence. We empower enterprises to leverage their Edge environment to increase revenue, improve operational efficiency, and manage safety and digital risks using Edge and AI technologies. 

 

SmartHub is an equal opportunity employer and will always be committed to nurturing a workplace culture that supports, inspires and respects all individuals, and encourages employees to bring their best selves to work, laugh and share. We seek builders who bring a variety of backgrounds, perspectives and skills to our team. 

Summary 

This role requires the candidate to translate business and product requirements into building, maintaining and optimizing data systems, which can be relational or non-relational in nature. The candidate is expected to tune and analyse the data for short- and long-term trend analysis, reporting, and AI/ML use cases. 

We are looking for a talented technical professional with at least 8 years of proven experience in owning, architecting, designing, operating and optimising databases that are used for large scale analytics and reports. 

Responsibilities 

  • Provide technical & architectural leadership for the next generation of product development. 
  • Innovate, Research & Evaluate new technologies and tools for a quality output. 
  • Architect, Design and Implement ensuring scalability, performance and security. 
  • Code and implement new algorithms to solve complex problems. 
  • Analyze complex data, develop, optimize and transform large data sets both structured and unstructured. 
  • Ability to deploy and administer the database and continuously tune it for performance, especially on container orchestration stacks such as Kubernetes  
  • Develop analytical models and solutions. Mentor junior members technically in architecture, design and robust coding. 
  • Work in an Agile development environment while continuously evaluating and improving engineering processes 

Required 

  • At least 8 years of experience with significant depth in designing and building scalable distributed database systems for enterprise-class products, and experience of working in product development companies. 
  • Should have been feature/component lead for several complex features involving large datasets. 
  • Strong background in relational and non-relational databases such as Postgres, MongoDB and Hadoop, etc. 
  • Deep expertise in database optimization and tuning; SQL, time series databases, Apache Drill, HDFS and Spark are good to have 
  • Excellent analytical and problem-solving skill sets. 
  • Experience in  for high throughput is highly desirable 
  • Exposure to database provisioning in Kubernetes/non-Kubernetes environments, configuration and tuning in a highly available mode. 
  • Demonstrated ability to provide technical leadership and mentoring to the team 


Kaleidofin

at Kaleidofin

3 recruiters
Poornima B
Posted by Poornima B
Chennai, Bengaluru (Bangalore)
3 - 8 yrs
Best in industry
Data Science
Machine Learning (ML)
Python
SQL
Natural Language Processing (NLP)
  • 4+ years’ experience in advanced analytics, model building and statistical modeling
  • Solid technical / data-mining skills and ability to work with large volumes of data; extract and manipulate large datasets using common tools such as Python, SQL and other programming/scripting languages to translate data into business decisions/results
  • Be data-driven and outcome-focused
  • Must have good business judgment with demonstrated ability to think creatively and strategically
  • Must be an intuitive, organized analytical thinker, with the ability to perform detailed analysis
  • Takes personal ownership; self-starter; ability to drive projects with minimal guidance and focus on high-impact work
  • Learns continuously; seeks out knowledge, ideas and feedback
  • Looks for opportunities to build own skills, knowledge and expertise
  • Experience with big data and cloud computing, viz. Spark, Hadoop (MapReduce, Pig, Hive)
  • Experience in risk and credit score domains preferred
  • Comfortable with ambiguity and frequent context-switching in a fast-paced environment
Response Informatics

at Response Informatics

13 recruiters
Swagatika Sahoo
Posted by Swagatika Sahoo
Remote, Bengaluru (Bangalore)
5 - 10 yrs
₹5L - ₹25L / yr
MicroStrategy
SQL
Business Intelligence (BI)
Experience in MicroStrategy, with knowledge of databases such as SQL and of BI concepts. Strong communication skills.
Scry AI

at Scry AI

1 recruiter
Siddarth Thakur
Posted by Siddarth Thakur
Remote only
3 - 8 yrs
₹15L - ₹20L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+6 more

Title: Data Engineer (Azure) (Location: Gurgaon/Hyderabad)

Salary: Competitive as per Industry Standard

We are expanding our Data Engineering Team and hiring passionate professionals with extensive knowledge and experience in building and managing large enterprise data and analytics platforms. We are looking for creative individuals with strong programming skills, who can understand complex business and architectural problems and develop solutions. The individual will work closely with the rest of our data engineering and data science team in implementing and managing Scalable Smart Data Lakes, Data Ingestion Platforms, Machine Learning and NLP based Analytics Platforms, Hyper-Scale Processing Clusters, Data Mining and Search Engines.

What You’ll Need:

  • 3+ years of industry experience in creating and managing end-to-end Data Solutions, Optimal Data Processing Pipelines and Architecture dealing with large volume, big data sets of varied data types.

  • Proficiency in Python, Linux and shell scripting.
  • Strong knowledge of working with PySpark DataFrames and Pandas DataFrames for writing efficient pre-processing and other data manipulation tasks (a brief sketch follows this list).
  • Strong experience in developing the infrastructure required for data ingestion and for optimal extraction, transformation and loading of data from a wide variety of data sources, using tools like Azure Data Factory and Azure Databricks (or Jupyter notebooks / Google Colab, or other similar tools).
  • Working knowledge of GitHub or other version control tools.
  • Experience with creating RESTful web services and API platforms.
  • Work with data science and infrastructure team members to implement practical machine learning solutions and pipelines in production.
  • Experience with cloud providers like Azure/AWS/GCP.
  • Experience with SQL and NoSQL databases: MySQL, Azure Cosmos DB, HBase, MongoDB, Elasticsearch, etc.
  • Experience with stream-processing systems such as Spark Streaming and Kafka, and working experience with event-driven architectures.
  • Strong analytic skills related to working with unstructured datasets.
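A minimal sketch of the PySpark/Pandas DataFrame pre-processing mentioned above; the input path and column names are hypothetical.

```python
import pandas as pd
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("preprocessing-example").getOrCreate()

# Hypothetical raw event feed, for illustration only.
events = spark.read.json("/data/raw/events.json")

cleaned = (
    events
    .dropDuplicates(["event_id"])
    .withColumn("event_time", F.to_timestamp("event_time"))
    .filter(F.col("user_id").isNotNull())
)

# Small aggregates can be pulled into Pandas for downstream analysis or modelling.
daily_counts: pd.DataFrame = (
    cleaned.groupBy(F.to_date("event_time").alias("day"))
    .count()
    .toPandas()
)
print(daily_counts.head())
```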

 

Good to have (to filter or prioritize candidates)

  • Experience with testing libraries such as pytest for writing unit tests for the developed code.
  • Knowledge of machine learning algorithms and libraries would be good to have; implementation experience would be an added advantage.
  • Knowledge and experience of data lakes, Docker and Kubernetes would be good to have.
  • Knowledge of Azure Functions, Elasticsearch, etc. will be good to have.
  • Experience with model versioning (MLflow) and data versioning will be beneficial.
  • Experience with microservices libraries or with Python libraries such as Flask for hosting ML services and models would be great.
MNC

Agency job
via Fragma Data Systems by Priyanka U
Remote, Bengaluru (Bangalore)
2 - 6 yrs
₹6L - ₹15L / yr
Spark
Apache Kafka
PySpark
Internet of Things (IOT)
Real time media streaming

JD for IoT Data Engineer:

 

The role requires experience in Azure core technologies – IoT Hub / Event Hub, Stream Analytics, IoT Central, Azure Data Lake Storage, Azure Cosmos DB, Azure Data Factory, Azure SQL Database, Azure HDInsight / Databricks, and SQL Data Warehouse.

 

You Have:

  • Minimum 2 years of software development experience
  • Minimum 2 years of experience in IoT/streaming data pipelines solution development
  • Bachelor's and/or Master’s degree in computer science
  • Strong Consulting skills in data management including data governance, data quality, security, data integration, processing, and provisioning
  • Delivered data management projects with real-time/near real-time data insights delivery on Azure Cloud
  • Translated complex analytical requirements into the technical design including data models, ETLs, and Dashboards / Reports
  • Experience deploying dashboards and self-service analytics solutions on both relational and non-relational databases
  • Experience with different computing paradigms in databases such as In-Memory, Distributed, Massively Parallel Processing
  • Successfully delivered large scale IOT data management initiatives covering Plan, Design, Build and Deploy phases leveraging different delivery methodologies including Agile
  • Experience in handling telemetry data with Spark Streaming, Kafka, Flink, Scala, PySpark and Spark SQL (see the sketch after this list)
  • Hands-on experience with containers and Docker
  • Exposure to streaming protocols like MQTT and AMQP
  • Knowledge of OT network protocols like OPC UA, CAN Bus, and similar protocols
  • Strong knowledge of continuous integration, static code analysis, and test-driven development
  • Experience in delivering projects in a highly collaborative delivery model with teams at onsite and offshore
  • Must have excellent analytical and problem-solving skills
  • Delivered change management initiatives focused on driving data platforms adoption across the enterprise
  • Strong verbal and written communications skills are a must, as well as the ability to work effectively across internal and external organizations
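A minimal Spark Structured Streaming sketch of the telemetry handling mentioned above, reading device messages from Kafka; the broker, topic and telemetry schema are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F, types as T

spark = SparkSession.builder.appName("telemetry-stream").getOrCreate()

# Hypothetical telemetry schema, for illustration only.
schema = T.StructType([
    T.StructField("device_id", T.StringType()),
    T.StructField("temperature", T.DoubleType()),
    T.StructField("ts", T.TimestampType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
    .option("subscribe", "device-telemetry")            # hypothetical topic
    .load()
)

# Parse the JSON payload and compute 5-minute average temperature per device.
telemetry = (
    raw.select(F.from_json(F.col("value").cast("string"), schema).alias("t"))
    .select("t.*")
    .withWatermark("ts", "10 minutes")
    .groupBy(F.window("ts", "5 minutes"), "device_id")
    .agg(F.avg("temperature").alias("avg_temp"))
)

query = telemetry.writeStream.outputMode("append").format("console").start()
query.awaitTermination()
```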
     

Roles & Responsibilities
 

You Will:

  • Translate functional requirements into technical design
  • Interact with clients and internal stakeholders to understand the data and platform requirements in detail and determine core Azure services needed to fulfill the technical design
  • Design, Develop and Deliver data integration interfaces in ADF and Azure Databricks
  • Design, Develop and Deliver data provisioning interfaces to fulfill consumption needs
  • Deliver data models on the Azure platform; this could be on Azure Cosmos DB, SQL DW / Synapse, or SQL
  • Advise clients on ML Engineering and deploying ML Ops at Scale on AKS
  • Automate core activities to minimize the delivery lead times and improve the overall quality
  • Optimize platform cost by selecting the right platform services and architecting the solution in a cost-effective manner
  • Deploy Azure DevOps and CI/CD processes
  • Deploy logging and monitoring across the different integration points for critical alerts

 

Product / Internet / Media Companies

Agency job
via Archelons Consulting by Meenu Singh
Bengaluru (Bangalore)
4 - 9 yrs
₹15L - ₹30L / yr
Big Data
Hadoop
Data processing
Python
Data engineering
+3 more

REQUIREMENT:

  •  Previous experience of working in large scale data engineering
  •  4+ years of experience working in data engineering and/or backend technologies with cloud experience (any) is mandatory.
  •  Previous experience of architecting and designing backend for large scale data processing.
  •  Familiarity and experience of working in different technologies related to data engineering – different database technologies, Hadoop, Spark, Storm, Hive, etc.
  •  Hands-on and have the ability to contribute a key portion of data engineering backend.
  •  Self-inspired and motivated to drive for exceptional results.
  •  Familiarity and experience working with different stages of data engineering – data acquisition, data refining, large scale data processing, efficient data storage for business analysis.
  •  Familiarity and experience working with different DB technologies and how to scale them.

RESPONSIBILITY:

  •  End to end responsibility to come up with data engineering architecture, design, development and then implementation of it.
  •  Build data engineering workflow for large scale data processing.
  •  Discover opportunities in data acquisition.
  •  Bring industry best practices for data engineering workflow.
  •  Develop data set processes for data modelling, mining and production.
  •  Take additional tech responsibilities for driving an initiative to completion
  •  Recommend ways to improve data reliability, efficiency and quality
  •  Goes out of their way to reduce complexity.
  •  Humble and outgoing - engineering cheerleaders.
Bengaluru (Bangalore)
3 - 12 yrs
₹3L - ₹25L / yr
Java
Python
Spark
Hadoop
MongoDB
+3 more
We are a start-up in India seeking excellence in everything we do with an unwavering curiosity and enthusiasm. We build a simplified, new-age, AI-driven Big Data Analytics platform for global enterprises and solve their biggest business challenges. Our engineers develop fresh, intuitive solutions keeping the user at the centre of everything. As a Cloud-ML Engineer, you will design and implement ML solutions for customer use cases and problem-solve complex technical customer challenges.

Expectations and Tasks:
  • Total of 7+ years of experience with a minimum of 2 years in Hadoop technologies like HDFS, Hive, MapReduce
  • Experience working with recommendation engines, data pipelines, or distributed machine learning, and experience with data analytics and data visualization techniques and software
  • Experience with core data science techniques such as regression, classification or clustering, and experience with deep learning frameworks (a brief sketch follows this description)
  • Experience in NLP, R and Python
  • Experience in performance tuning and optimization techniques to process big data from heterogeneous sources
  • Ability to communicate clearly and concisely across technology and the business teams
  • Excellent problem-solving and technical troubleshooting skills
  • Ability to handle multiple projects and prioritize tasks in a rapidly changing environment

Technical Skills: Core Java, Multithreading, Collections, OOPS, Python, R, Apache Spark, MapReduce, Hive, HDFS, Hadoop, MongoDB, Scala

We are a retained search firm employed by our client, a technology start-up in Bangalore. Interested candidates can share their resumes with me at [email protected]. I will respond to you within 24 hours. Online assessments and pre-employment screening are part of the selection process.
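A minimal scikit-learn sketch of the core classification technique mentioned under Expectations and Tasks, using a built-in toy dataset rather than any real customer data.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Toy dataset standing in for real customer data.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

predictions = model.predict(X_test)
print(f"Test accuracy: {accuracy_score(y_test, predictions):.2f}")
```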