Associate Manager - Database Development (PostgreSQL)

at Sportz Interactive

Posted by Nishita Dsouza
Remote, Mumbai, Navi Mumbai, Pune, Nashik
7 - 12 yrs
₹15L - ₹16L / yr
Full time
Skills
PostgreSQL
PL/SQL
Big Data
Optimization
Stored Procedures

Job Role: Associate Manager (Database Development)


Key Responsibilities:

  • Optimizing the performance of stored procedures and SQL queries to deliver large volumes of data within seconds.
  • Designing and developing complex queries, views, functions, and stored procedures that work seamlessly with the Application/Development teams' data needs.
  • Providing solutions for all data-related needs to support existing and new applications.
  • Creating scalable structures to cater to large user bases and manage high workloads.
  • Owning every stage of a project, from requirement gathering through implementation and maintenance.
  • Developing custom stored procedures and packages to support new enhancements.
  • Working with multiple teams to design, develop and deliver early-warning systems.
  • Reviewing query performance and optimizing code.
  • Writing queries used by front-end applications.
  • Designing and coding database tables to store application data.
  • Data modelling to visualize the database structure.
  • Working with application developers to create optimized queries.
  • Maintaining database performance by troubleshooting problems.
  • Accomplishing platform upgrades and improvements by supervising system programming.
  • Securing databases by developing policies, procedures, and controls.
  • Designing and managing deep statistical systems.
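To make the query-tuning responsibilities above concrete, here is a minimal sketch of the review-the-plan-then-index workflow. The table and column names are invented, and Python's built-in SQLite driver stands in for PostgreSQL (where the equivalent tool is `EXPLAIN ANALYZE`); the principle, that an index turns a full-table scan into a direct search, is the same:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE scores (player_id INTEGER, points INTEGER)")
conn.executemany("INSERT INTO scores VALUES (?, ?)",
                 [(i % 1000, i) for i in range(10_000)])

query = "SELECT SUM(points) FROM scores WHERE player_id = 42"

# Without an index the planner must scan every row of the table.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(plan_before)  # plan detail mentions a full SCAN of scores

conn.execute("CREATE INDEX idx_scores_player ON scores (player_id)")

# With the index the planner can seek directly to the matching rows.
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(plan_after)  # plan detail mentions a SEARCH using idx_scores_player
```

On a real PostgreSQL instance the same comparison would be done with `EXPLAIN ANALYZE` before and after `CREATE INDEX`, checking that a `Seq Scan` becomes an `Index Scan`.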

Desired Skills and Experience:

  • 7+ years of experience in database development
  • Minimum 4+ years of experience with PostgreSQL is a must
  • Experience with and in-depth knowledge of PL/SQL
  • Ability to propose multiple possible solutions to a problem and decide on the approach best suited to the use case
  • Knowledge of database administration and experience using CLI tools for administration
  • Experience with Big Data technologies is an added advantage
  • Secondary platforms: MS SQL 2005/2008, Oracle, MySQL
  • Ability to take ownership of tasks and flexibility to work individually or in a team
  • Ability to communicate with teams and clients across time zones and global regions
  • Good communication skills and self-motivation
  • Ability to work under pressure
  • Knowledge of NoSQL and cloud architecture is an advantage

About Sportz Interactive

Founded: 2002
Size: 100-1000
Stage: Profitable

Sportz Interactive is a sports-focused digital media agency delivering best-in-class websites, mobile applications, fantasy gaming, content and social media management.

Similar jobs

Delhi, Gurugram, Noida, Ghaziabad, Faridabad
3 - 7 yrs
₹10L - ₹15L / yr
SQL
Hadoop
Spark
Machine Learning (ML)
Data Science
+3 more

Job Description:

The data science team is responsible for solving business problems with complex data. Data complexity can be characterized in terms of volume, dimensionality and multiple touchpoints/sources. We understand the data, ask fundamental first-principles questions, and apply our analytical and machine-learning skills to solve the problem in the best way possible.

 

Our ideal candidate

This is a client-facing role, so good communication skills are a must.

The candidate should be able to communicate complex models and analyses in a clear and precise manner.

 

The candidate would be responsible for:

  • Comprehending business problems properly: what to predict, how to build the DV, what value addition he/she is bringing to the client, etc.
  • Understanding and analyzing large, complex, multi-dimensional datasets and building features relevant to the business
  • Understanding the math behind algorithms and choosing one over another
  • Understanding approaches such as stacking and ensembling, and applying them correctly to increase accuracy
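The ensembling point above rests on a simple fact: when models err on different examples, a majority vote can outperform every individual member. A toy sketch (labels and prediction tables are invented for illustration; real work would use sklearn-style estimators):

```python
from collections import Counter

# Toy labels and three "weak model" prediction tables; each model is wrong
# on a different example, so their errors are uncorrelated.
labels  = {0: 1, 1: 1, 2: 1, 3: 0, 4: 0}
preds_a = {0: 0, 1: 1, 2: 1, 3: 0, 4: 0}  # wrong on example 0
preds_b = {0: 1, 1: 0, 2: 1, 3: 0, 4: 0}  # wrong on example 1
preds_c = {0: 1, 1: 1, 2: 0, 3: 0, 4: 0}  # wrong on example 2

def accuracy(preds):
    return sum(preds[i] == y for i, y in labels.items()) / len(labels)

def vote(i):
    # Majority vote across the three models for example i.
    return Counter([preds_a[i], preds_b[i], preds_c[i]]).most_common(1)[0][0]

ensemble = {i: vote(i) for i in labels}
print(accuracy(preds_a), accuracy(preds_b), accuracy(preds_c))  # 0.8 0.8 0.8
print(accuracy(ensemble))  # 1.0
```

Stacking extends the same idea by training a second-level model on the base models' predictions instead of taking a fixed vote.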

Desired technical requirements

  • Proficiency with Python and the ability to write production-ready code
  • Experience in PySpark, machine learning and deep learning
  • Big data experience, e.g. familiarity with Spark and Hadoop, is highly preferred
  • Familiarity with SQL or other databases
at Panamax InfoTech Ltd.
Posted by Bhavani P
Remote only
3 - 12 yrs
₹3L - ₹8L / yr
Unix administration
PL/SQL
Java
SQL Query Analyzer
Shell Scripting
+1 more
DBMS & SQL
Concepts of RDBMS, normalization techniques
Entity-Relationship diagram / ER model
Transactions, commit, rollback, ACID properties
Transaction log
How a column's behavior differs when it is nullable
SQL statements
Join operations
DDL, DML, data modelling
Optimal query writing with aggregate functions, GROUP BY, HAVING, ORDER BY, etc.; should be hands-on with scenario-based query writing
Query optimization techniques, indexing in depth
Understanding query plans
Batching
Locking schemes
Isolation levels
Concepts of stored procedures, cursors, triggers, views
Beginner-level PL/SQL: procedure and function writing skills
Spring JPA and Spring Data basics
Hibernate mappings
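The transaction concepts in the list above (commit, rollback, atomicity) can be sketched with Python's built-in sqlite3 module standing in for a full RDBMS; the table and values are invented, and the behavior shown is generic SQL rather than engine-specific:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 50)")
conn.commit()

# A transfer is two UPDATE statements; atomicity means a rollback undoes
# both of them or neither.
try:
    conn.execute("UPDATE accounts SET balance = balance - 80 WHERE name = 'alice'")
    conn.execute("UPDATE accounts SET balance = balance + 80 WHERE name = 'bob'")
    raise RuntimeError("simulated failure before commit")
except RuntimeError:
    conn.rollback()  # both updates are undone together

balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # {'alice': 100, 'bob': 50} (unchanged after rollback)
```

Had the failure occurred after `conn.commit()`, both updates would have persisted; there is no state in which only one of them applies.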

UNIX
Basic concepts of Unix
Commonly used Unix commands and their options
Combining Unix commands using pipes, filters, etc.
The vi editor and its different modes
Basic shell scripting and basic knowledge of how to execute jar files from the host
File and directory permissions
Application-based scenarios
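Combining commands with pipes and filters, as listed above, is the core Unix idiom: each tool does one job and the pipe chains them. A small example (the input text is invented) that ranks words by frequency:

```shell
# tr splits the line into one word per row, sort groups duplicates together,
# uniq -c counts each group, sort -rn ranks by count, head keeps the winner.
printf 'db db unix db unix sql\n' \
  | tr ' ' '\n' \
  | sort \
  | uniq -c \
  | sort -rn \
  | head -1
```

The same pattern scales from toy input to log files: swap `printf` for `cat access.log` and the rest of the pipeline is unchanged.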
Read more
at codersbrain
1 recruiter
DP
Posted by Aishwarya Hire
Bengaluru (Bangalore)
4 - 6 yrs
₹8L - ₹10L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+1 more
  • Design the architecture of our big data platform
  • Perform and oversee tasks such as writing scripts, calling APIs, web scraping, and writing SQL queries
  • Design and implement data stores that support the scalable processing and storage of our high-frequency data
  • Maintain our data pipeline
  • Customize and oversee integration tools, warehouses, databases, and analytical systems
  • Configure and ensure the availability of data-access tools used by all data scientists


Agency job
via Recruiting India by Moumita Santra
Chennai
10 - 19 yrs
₹12L - ₹40L / yr
Big Data
Apache Spark
Spark
PySpark
ETL
+1 more

Job Sector: IT, Software

Job Type: Permanent

Location: Chennai

Experience: 10 - 20 Years

Salary: 12 – 40 LPA

Education: Any Graduate

Notice Period: Immediate

Key Skills: Python, Spark, AWS, SQL, PySpark

Contact at triple eight two zero nine four two double seven

 

Job Description:

Requirements

  • Minimum 12 years' experience
  • In-depth understanding and knowledge of distributed computing with Spark
  • Deep understanding of Spark architecture and internals
  • Proven experience in data ingestion, data integration and data analytics with Spark, preferably PySpark
  • Expertise in ETL processes, data warehousing and data lakes
  • Hands-on with Python for big data and analytics
  • Hands-on experience with the agile scrum model is an added advantage
  • Knowledge of CI/CD and orchestration tools is desirable
  • Knowledge of AWS S3, Redshift and Lambda is preferred
Thanks
Posted by Swati Sharma
Hyderabad
7 - 15 yrs
₹10L - ₹20L / yr
Data Warehouse (DWH)
Informatica
ETL
Data modeling
Dimensional modeling
+6 more

Job Summary

 

The purpose of this role is to rationalize the full stack BI architecture, including ETL, Dimensional Model and Semantic layer and establishment of an efficient CI/CD pipeline. This platform will support efficient production of BI reports and models, to deliver insights and decision support for specific business processes.

 

Responsibilities

 

  • Establish a clean dimensional model architecture, in a platform that is aligned with the BW BI strategy, in tune with the scale and complexity of the BV business
  • Supported by existing, in-house Application and IBM i DB2 experts, rationalize the existing DB2, SQL ETL code base and add components, to feed the chosen architecture
  • Establish a Data Warehouse, which may overlap with PBI Dataflows, to support Fact Table aggregations and Slowly Changing Dimensions along with basic shaping and cleaning
  • Rationalize the existing Power BI Workspaces to align with the chosen architecture and establish CI/CD pipelines, to support ongoing management of the model, with appropriate orchestration
  • Establish Master Data management to provide a self-serve user interface for managing off-system Master Data, to support BI
  • Rationalize the Semantic Layer in the Power BI model to provide a clean structure, with consistent naming conventions, in Business language, to support self-serve analysis

 

Ideal Candidates should have

 

  • Familiarity with best-practice Azure and PBI architectural principles
  • 4 to 5 years’ experience, Microsoft Certified preferable
  • Expert in Dimensional Modelling in an Azure/PBI environment
  • Comfortable with SDLC Tools for BI (Git, MS Devops, Pipelines)
  • Mature, personable, team player
  • Strong analytical and problem-solving skills
  • Strong oral and written communication skills
Posted by Uma Sravya B
Ahmedabad
4 - 7 yrs
₹12L - ₹20L / yr
Hadoop
Big Data
Data engineering
Spark
Apache Beam
+13 more
Responsibilities:
1. Communicate with the clients and understand their business requirements.
2. Build, train, and manage your own team of junior data engineers.
3. Assemble large, complex data sets that meet the client’s business requirements.
4. Identify, design and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
5. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources, including the cloud.
6. Assist clients with data-related technical issues and support their data infrastructure requirements.
7. Work with data scientists and analytics experts to strive for greater functionality.

Skills required (experience with most of these):
1. Experience with Big Data tools: Hadoop, Spark, Apache Beam, Kafka, etc.
2. Experience with object-oriented / functional scripting languages: Python, Java, C++, Scala, etc.
3. Experience in ETL and data warehousing.
4. Experience with and a firm understanding of relational and non-relational databases such as MySQL, MS SQL Server, Postgres, MongoDB, Cassandra, etc.
5. Experience with cloud platforms such as AWS, GCP and Azure.
6. Experience with workflow management using tools like Apache Airflow.
at DataMetica
Posted by Sumangali Desai
Pune, Hyderabad
7 - 12 yrs
₹7L - ₹20L / yr
Apache Spark
Big Data
Spark
Scala
Hadoop
+3 more
We at Datametica Solutions Private Limited are looking for a Big Data Spark Lead with a passion for the cloud and knowledge of different on-premise and cloud data implementations in the field of Big Data and analytics, including but not limited to Teradata, Netezza, Exadata, Oracle, Cloudera, Hortonworks and the like.
Ideal candidates should have technical experience in migrations and the ability to help customers get value from Datametica's tools and accelerators.

Job Description
Experience: 7+ years
Location: Pune / Hyderabad
Skills:
  • Drive and participate in requirements-gathering workshops, estimation discussions, design meetings and status review meetings
  • Participate and contribute in solution design and solution architecture for implementing Big Data projects on-premise and on the cloud
  • Hands-on technical experience in the design, coding, development and management of large Hadoop implementations
  • Proficient in SQL, Hive, Pig, Spark SQL, shell scripting, Kafka, Flume and Sqoop on large Big Data and data warehousing projects, with a Java-, Python- or Scala-based Hadoop programming background
  • Proficient with various development methodologies such as waterfall, agile/scrum and iterative
  • Good interpersonal skills and excellent communication skills for US- and UK-based clients

About Us!
A global Leader in the Data Warehouse Migration and Modernization to the Cloud, we empower businesses by migrating their Data/Workload/ETL/Analytics to the Cloud by leveraging Automation.

We have expertise in transforming legacy Teradata, Oracle, Hadoop, Netezza, Vertica, Greenplum along with ETLs like Informatica, Datastage, AbInitio & others, to cloud-based data warehousing with other capabilities in data engineering, advanced analytics solutions, data management, data lake and cloud optimization.

Datametica is a key partner of the major cloud service providers - Google, Microsoft, Amazon, Snowflake.


We have our own products!
Eagle – Data Warehouse Assessment & Migration Planning product
Raven – Automated Workload Conversion product
Pelican – Automated Data Validation product, which helps automate and accelerate data migration to the cloud.

Why join us!
Datametica is a place to innovate, bring new ideas to life and learn new things. We believe in building a culture of innovation, growth and belonging. Our people and their dedication over the years are the key factors in achieving our success.

Benefits we Provide!
Working with Highly Technical and Passionate, mission-driven people
Subsidized Meals & Snacks
Flexible Schedule
Approachable leadership
Access to various learning tools and programs
Pet Friendly
Certification Reimbursement Policy

Check out more about us on our website below!
www.datametica.com
Bengaluru (Bangalore)
5 - 14 yrs
₹12L - ₹24L / yr
Data migration
PostgreSQL
Oracle

Data Engineer - GCP - data migration / Postgres / Oracle

 

Budget: Rs 200,000 / month

Domain Knowledge: Banking

Contract Hire: 12 months

No. of Open Positions: 2

Roles and Responsibilities:

- This role will involve all aspects of data migration with respect to any microservice being migrated.

- Analyze the source schema for the service in question

- Skilled at using the migration design to produce an instance of any templates or artifacts required to migrate the data

- Deliver a working version of the migration design to migrate the data for the service in question

- Develop a suite of scripts that will allow technical verification, either automated (e.g. rowcounts) or manual (e.g. field comparisons)

- Assist in the triage and resolution of any data related defects
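The verification scripts mentioned above typically boil down to two checks: do source and target row counts match, and do key fields match row by row. A hedged sketch of both (table, column names and data are invented; SQLite in-memory databases stand in for the Oracle source and Postgres target):

```python
import sqlite3

# Hypothetical source and target stores for a migrated microservice.
src = sqlite3.connect(":memory:")
tgt = sqlite3.connect(":memory:")
for db in (src, tgt):
    db.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, email TEXT)")

src.executemany("INSERT INTO customers VALUES (?, ?)",
                [(1, "a@x.com"), (2, "b@x.com"), (3, "c@x.com")])
# Simulate a faulty migration that dropped a row and mangled a field.
tgt.executemany("INSERT INTO customers VALUES (?, ?)",
                [(1, "a@x.com"), (2, "B@X.COM")])

def rowcount(db, table):
    # Automated check: total rows per table.
    return db.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]

def field_mismatches(table, key, field):
    # Manual-style check: compare one field row by row, keyed on the PK.
    src_rows = dict(src.execute(f"SELECT {key}, {field} FROM {table}"))
    tgt_rows = dict(tgt.execute(f"SELECT {key}, {field} FROM {table}"))
    return {k: (src_rows[k], tgt_rows.get(k))
            for k in src_rows if tgt_rows.get(k) != src_rows[k]}

print(rowcount(src, "customers"), rowcount(tgt, "customers"))  # 3 2
print(field_mismatches("customers", "id", "email"))
```

The mismatch report pinpoints both the dropped row (id 3) and the mangled field (id 2), which is exactly the evidence needed to triage a data defect.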

Mandatory skills :

- Previous experience of data migration

- Knowledge of Postgres and Oracle

- Previous experience of data analysis

 

 

 
 
at WyngCommerce
Posted by Ankit Jain
Bengaluru (Bangalore)
4 - 7 yrs
₹18L - ₹25L / yr
Data Science
Demand forecasting
Optimization
WyngCommerce is building state-of-the-art AI software for global consumer brands & retailers to enable best-in-class customer experiences. Our vision is to democratise machine learning algorithms for our customers and help them realise dramatic improvements in speed, cost and flexibility. Backed by a clutch of prominent angel investors, and with some of the category leaders in the retail industry as clients, we are looking to hire for our data science team. The data science team at WyngCommerce is on a mission to challenge the norms and re-imagine how retail business should be run across the world. As a Senior Data Scientist in the team, you will be driving and owning the thought leadership and impact on one of our core data science problems. You will work collaboratively with the founders, clients and engineering team to formulate complex problems, run exploratory data analysis, test hypotheses, implement ML-based solutions and fine-tune them with more data. This is a high-impact role with goals that directly impact our business.

Your Role & Responsibilities:

- Lead and own the thought process on one or more of our core data science problems, e.g. product clustering, intertemporal optimization, etc.
- Actively participate and challenge assumptions in translating ambiguous business problems into one or more ML/optimization problems
- Implement data-driven solutions based on advanced ML and optimization algorithms to address business problems
- Research, experiment, and innovate ML/statistical approaches in various application areas of interest and contribute to IP
- Partner with engineering teams to build scalable, efficient, automated ML-based pipelines (training/evaluation/monitoring)
- Deploy, maintain, and debug ML/decision models in production environments
- Analyze and assess data to ensure high data quality and correctness of downstream processes
- Define and own metrics on solution quality, data quality and stability of ML pipelines
- Communicate results to stakeholders and present data/insights to participate in and drive decision making

Desired Skills & Experiences:

- Bachelors or Masters in a quantitative field from a top-tier college
- Minimum of 3+ years' experience in a data science role in a technology company
- Solid mathematical background (especially in linear algebra, probability theory, optimization theory, decision theory, operations research)
- Familiarity with theoretical aspects of common ML techniques (generalized linear models, ensembles, SVMs, clustering algorithms, graphical models, etc.), statistical tests/metrics, experiment design, and evaluation methodologies
- Solid foundation in data structures, algorithms, and programming language theory
- Demonstrable track record of dealing with ambiguity, prioritizing needs, a bias for iterative learning, and delivering results in a dynamic environment with minimal guidance
- Hands-on experience in at least one of the focus areas of the WyngCommerce data science team: (a) product clustering, (b) demand forecasting, (c) intertemporal optimization, (d) reinforcement learning, (e) transfer learning
- Good programming skills (fluent in Java/Python/SQL) with experience using common ML toolkits (e.g., sklearn, TensorFlow, Keras, NLTK) to build models for real-world problems
- Computational thinking and familiarity with practical application requirements (e.g., latency, memory, processing time)
- Experience using cloud-based ML platforms (e.g., AWS SageMaker, Azure ML), cloud-based data storage, and deploying ML models in production environments in collaboration with engineering teams
- Excellent written and verbal communication skills for both technical and non-technical audiences
- (Plus point) Experience applying ML and other techniques in the supply chain domain, particularly in retail, for inventory optimization, demand forecasting, assortment planning, and other such problems
- (Nice to have) Research experience and publications in top ML/data science conferences
at Chariot Tech
Posted by Raj Garg
NCR (Delhi | Gurgaon | Noida)
1 - 5 yrs
₹15L - ₹16L / yr
Machine Learning (ML)
Big Data
Data Science
We are looking for a Machine Learning Developer who possesses a passion for machine technology & big data and will work with the next-generation Universal IoT platform.

Responsibilities:
• Design and build machines that learn, predict and analyze data
• Build and enhance tools to mine data at scale
• Enable the integration of Machine Learning models in the Chariot IoT Platform
• Ensure the scalability of Machine Learning analytics across millions of networked sensors
• Work with other engineering teams to integrate our streaming, batch, or ad-hoc analysis algorithms into Chariot IoT's suite of applications
• Develop generalizable APIs so other engineers can use our work without needing to be machine learning experts