SQL Developer

Posted by Nikita Aher
2 - 6 yrs
₹3L - ₹15L / yr
Full time
Pune
Skills
SQL
Linux/Unix
Shell Scripting
SQL server
PL/SQL
Data Warehouse (DWH)
Big Data
Hadoop

Datametica is looking for talented SQL engineers who will receive training and the opportunity to work on Cloud and Big Data analytics.

 

Mandatory Skills:

  • Strong SQL development skills
  • Hands-on experience with at least one scripting language, preferably shell scripting
  • Development experience on data warehouse projects (a brief sketch of this kind of work follows below)
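
As a hedged illustration of the kind of SQL-plus-scripting work listed above, here is a minimal Python sketch of a warehouse-style load step: aggregating a staging table into a summary table. The table names and schema are invented for illustration and are not part of this posting.

    # Hypothetical sketch of a tiny warehouse load step driven from a script.
    # Table names and schema are made-up placeholders.
    import sqlite3

    conn = sqlite3.connect(":memory:")  # stand-in for a real warehouse connection
    conn.executescript("""
        CREATE TABLE stg_sales (sale_date TEXT, region TEXT, amount REAL);
        INSERT INTO stg_sales VALUES
            ('2024-01-01', 'west', 120.0),
            ('2024-01-01', 'east', 80.0),
            ('2024-01-02', 'west', 50.0);
        CREATE TABLE fct_daily_sales (sale_date TEXT, region TEXT, total REAL);
    """)

    # Aggregate staging rows into the fact table, one row per day and region.
    conn.execute("""
        INSERT INTO fct_daily_sales
        SELECT sale_date, region, SUM(amount)
        FROM stg_sales
        GROUP BY sale_date, region
    """)

    for row in conn.execute("SELECT * FROM fct_daily_sales ORDER BY sale_date"):
        print(row)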

Opportunities:

  • Selected candidates will be provided training on one or more of the following: Google Cloud, AWS, DevOps tools, and Big Data technologies like Hadoop, Pig, Hive, Spark, Sqoop, Flume, and Kafka
  • Would get a chance to be part of enterprise-grade implementations of Cloud and Big Data systems
  • Will play an active role in setting up a modern data platform based on Cloud and Big Data
  • Would be part of teams with rich experience in various aspects of distributed systems and computing
About DataMetica

Founded: 2013
Size: 100-1000
Stage: Profitable

As a global leader in Data Warehouse Migration, Data Modernization, and Data Analytics, we empower businesses through automation and help you attain excellence. We believe in empowering companies to master their businesses and achieve their full potential, and we nurture clients with our innovative frameworks. Our embedded values help us strengthen the bond with our clients, ensuring growth for all. Datametica is a preferred partner of leading cloud vendors. We offer solutions for migrating from current Enterprise Data Warehouses to the Cloud, determining which option is best suited to your needs. We are giving Data Wings.
Similar jobs

Hyderabad
7 - 12 yrs
₹12L - ₹24L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+5 more

Skills

  • Proficient experience of a minimum of 7 years in Hadoop.
  • Hands-on experience of a minimum of 2 years with AWS (EMR/S3 and other AWS services and dashboards).
  • Good experience of a minimum of 2 years with the Spark framework.
  • Good understanding of the Hadoop ecosystem, including Hive, MR, Spark, and Zeppelin.
  • Responsible for troubleshooting and recommendations for Spark and MR jobs; should be able to use existing logs to debug issues.
  • Responsible for implementation and ongoing administration of Hadoop infrastructure, including monitoring, tuning, and troubleshooting; triage production issues as they occur with other operational teams.
  • Hands-on experience troubleshooting incidents: formulating theories, testing hypotheses, and narrowing down possibilities to find the root cause.
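
As a hedged illustration of the Spark log-debugging work described above, here is a minimal PySpark sketch that scans application logs for error lines and ranks the most common exceptions. The S3 path and error patterns are assumptions, not details from this posting.

    # Hypothetical sketch: rank the exceptions found in Spark/MR logs on S3.
    # The bucket path and regex are made-up placeholders; s3:// paths assume EMRFS.
    from pyspark.sql import SparkSession, functions as F

    spark = (
        SparkSession.builder
        .appName("log-triage")
        .config("spark.sql.shuffle.partitions", "200")  # example tuning knob
        .getOrCreate()
    )

    logs = spark.read.text("s3://example-bucket/emr-logs/")  # assumed location
    errors = logs.filter(F.col("value").rlike("ERROR|Exception"))

    (errors
        .withColumn("exception", F.regexp_extract("value", r"(\w+Exception)", 1))
        .groupBy("exception")
        .count()
        .orderBy(F.desc("count"))
        .show(20, truncate=False))
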
Amagi Media Labs
Posted by Rajesh C
Bengaluru (Bangalore), Chennai
12 - 15 yrs
₹50L - ₹60L / yr
Data Science
Machine Learning (ML)
ETL
Data Warehouse (DWH)
Amazon Web Services (AWS)
+5 more
Job Title: Data Architect
Job Location: Chennai

Job Summary
The Engineering team is seeking a Data Architect. As a Data Architect, you will drive the Data Architecture strategy across various Data Lake platforms. You will help develop reference architectures and roadmaps to build highly available, scalable, and distributed data platforms, using cloud-based solutions to process high-volume, high-velocity, and wide-variety structured and unstructured data. This role is also responsible for driving innovation, prototyping, and recommending solutions. Above all, you will influence how users interact with Condé Nast's industry-leading journalism.
Primary Responsibilities
The Data Architect is responsible for:
• Demonstrated technology and personal leadership experience in architecting, designing, and building highly scalable solutions and products.
• Enterprise-scale expertise in data management best practices such as data integration, data security, data warehousing, metadata management, and data quality.
• Extensive knowledge and experience in architecting modern data integration frameworks and highly scalable distributed systems using open-source and emerging data architecture designs/patterns.
• Experience building external cloud (e.g., GCP, AWS) data applications and capabilities is highly desirable.
• Expert ability to evaluate, prototype, and recommend data solutions and vendor technologies and platforms.
• Proven experience in relational, NoSQL, and ELT/ETL technologies and in-memory databases.
• Experience with DevOps, Continuous Integration, and Continuous Delivery technologies is desirable.
• This role requires 15+ years of data solution architecture, design, and development delivery experience.
• Solid experience in Agile methodologies (Kanban and Scrum).
Required Skills
• Very strong experience in building large-scale, high-performance data platforms.
• Passionate about technology and delivering solutions for difficult and intricate problems. Current on relational and NoSQL databases on the cloud.
• Proven leadership skills; demonstrated ability to mentor, influence, and partner with cross-functional teams to deliver scalable, robust solutions.
• Mastery of relational database, NoSQL, ETL (such as Informatica, DataStage, etc.)/ELT, and data integration technologies.
• Experience in at least one object-oriented programming language (Java, Scala, Python) and Spark.
• Creative view of markets and technologies combined with a passion to create the future.
• Knowledge of cloud-based distributed/hybrid data-warehousing solutions; Data Lake knowledge is mandatory.
• Good understanding of emerging technologies and their applications.
• Understanding of code versioning tools such as GitHub, SVN, CVS, etc.
• Understanding of Hadoop architecture and Hive SQL.
• Knowledge of at least one workflow orchestration tool.
• Understanding of Agile frameworks and delivery.

Preferred Skills:
● Experience in AWS and EMR would be a plus
● Exposure to workflow orchestration tools like Airflow is a plus
● Exposure to any one NoSQL database would be a plus
● Experience in Databricks along with PySpark/Spark SQL would be a plus
● Experience with the Digital Media and Publishing domain would be a plus
● Understanding of digital web events, ad streams, and context models

About Condé Nast

CONDÉ NAST INDIA (DATA)
Over the years, Condé Nast successfully expanded and diversified into digital, TV, and social platforms, generating, in the process, a staggering amount of user data. Condé Nast made the right move to invest heavily in understanding this data and formed a whole new Data team entirely dedicated to data processing, engineering, analytics, and visualization. This team helps drive engagement, fuel process innovation, further content enrichment, and increase market revenue. The Data team aims to create a company culture where data is the common language and to facilitate an environment where insights shared in real time can improve performance.

The Global Data team operates out of Los Angeles, New York, Chennai, and London. The team at Condé Nast Chennai works extensively with data to amplify its brands' digital capabilities and boost online revenue. We are broadly divided into four groups: Data Intelligence, Data Engineering, Data Science, and Operations (including Product and Marketing Ops and Client Services), along with Data Strategy and Monetization. The teams build capabilities and products to create data-driven solutions for better audience engagement.

What we look forward to:
We want to welcome bright, new minds into our midst and work together to create diverse forms of self-expression. At Condé Nast, we encourage the imaginative and celebrate the extraordinary. We are a media company for the future, with a remarkable past. We are Condé Nast, and It Starts Here.
Product-based company
Bengaluru (Bangalore)
3 - 12 yrs
₹5L - ₹30L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+6 more

Responsibilities:

  • Act as a technical resource for the Data Science team and be involved in creating and implementing current and future analytics projects, such as data lake and data warehouse design.
  • Analyse and design ETL solutions to store and fetch data from multiple systems such as Google Analytics, CleverTap, and CRM systems.
  • Develop and maintain data pipelines for real-time as well as batch analytics use cases (a brief sketch follows this list).
  • Collaborate with data scientists and actively work in the feature engineering and data preparation phases of model building.
  • Collaborate with product development and DevOps teams in implementing data collection and aggregation solutions.
  • Ensure quality and consistency of the data in the data warehouse and follow best data governance practices.
  • Analyse large amounts of information to discover trends and patterns.
  • Mine and analyse data from company databases to drive optimization and improvement of product development, marketing techniques, and business strategies.
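
As a hedged illustration of the batch side of the pipeline work above, here is a minimal Python sketch that rolls raw events up into a daily summary. The file paths and column names are hypothetical.

    # Hypothetical batch step: roll raw events up into daily per-user counts.
    # File paths and column names are illustrative placeholders.
    import pandas as pd

    events = pd.read_csv("raw_events.csv", parse_dates=["event_time"])  # assumed schema

    daily = (
        events
        .assign(event_date=events["event_time"].dt.date)
        .groupby(["event_date", "user_id"])
        .size()
        .reset_index(name="event_count")
    )

    daily.to_parquet("daily_user_events.parquet", index=False)  # needs pyarrow or fastparquet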

Requirements

  • Bachelor's or Master's in a highly numerate discipline such as Engineering, Science, or Economics
  • 2-6 years of proven experience working as a Data Engineer, preferably in an e-commerce/web-based or consumer technologies company
  • Hands-on experience working with different big data tools like Hadoop, Spark, Flink, Kafka, and so on
  • Good understanding of the AWS ecosystem for big data analytics
  • Hands-on experience in creating data pipelines, either using tools or by independently writing scripts
  • Hands-on experience in scripting languages like Python, Scala, Unix shell scripting, and so on
  • Strong problem-solving skills with an emphasis on product development
  • Experience using business intelligence tools, e.g., Tableau or Power BI, would be an added advantage (not mandatory)
Nexsys
Posted by Kiran Basavaraj Nirakari
Bengaluru (Bangalore)
2 - 5 yrs
₹10L - ₹15L / yr
NumPy
pandas
MongoDB
SQL
NoSQL Databases
+2 more

What we look for: 

We are looking for an associate who will crunch data from various sources and extract the key points from it, help us improve and build new pipelines as requested, visualize the data when required, and find flaws in our existing algorithms.

Responsibilities: 

  • Work with multiple stakeholders to gather requirements for data or analysis and act on them.
  • Write new data pipelines and maintain the existing ones.
  • Gather data from various databases and derive the required metrics from it (a brief sketch follows below).
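
As a hedged illustration of this kind of metric extraction, here is a minimal Python sketch that pulls rows from a database with pandas and computes a simple metric. The database file, table, and columns are assumptions made for illustration.

    # Hypothetical sketch: pull order rows from a database and derive a metric.
    # The database file, table, and columns are made-up placeholders.
    import sqlite3
    import pandas as pd

    conn = sqlite3.connect("analytics.db")  # assumed local database
    orders = pd.read_sql("SELECT user_id, amount FROM orders", conn)

    # Average order value per user, sorted to surface the top spenders.
    avg_order_value = (
        orders.groupby("user_id")["amount"]
        .mean()
        .sort_values(ascending=False)
        .rename("avg_order_value")
    )
    print(avg_order_value.head(10))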

Required Skills: 

  • Experience with Python and libraries like pandas and NumPy. 
  • Experience in SQL and understanding of NoSQL databases. 
  • Hands-on experience in data engineering. 
  • Good analytical skills and knowledge of statistics. 
  • Understanding of data science concepts. 
  • Bachelor's degree in Computer Science or a related field. 
  • Problem-solving skills and the ability to work under pressure. 

Nice to have: 

  • Experience in MongoDB or any NoSQL database. 
  • Experience in Elasticsearch. 
  • Knowledge of Tableau, Power BI, or any other visualization tool.
Clairvoyant India Private Limited
Posted by Taruna Roy
Remote, Pune
3 - 8 yrs
₹4L - ₹15L / yr
Big Data
Hadoop
Java
Spark
Hibernate (Java)
+5 more
Job Title/Designation:
Mid / Senior Big Data Engineer
Job Description:
Role: Big Data Engineer
Number of open positions: 5
Location: Pune
At Clairvoyant, we're building a thriving big data practice to help enterprises enable and accelerate the adoption of Big Data and cloud services. In the big data space, we lead and serve as innovators, troubleshooters, and enablers. The big data practice at Clairvoyant focuses on solving our customers' business problems by delivering products designed with best-in-class engineering practices and a commitment to keeping the total cost of ownership to a minimum.
Must Have:
  • 4-10 years of experience in software development.
  • At least 2 years of relevant work experience on large-scale data applications.
  • Strong coding experience in Java is mandatory.
  • Good aptitude, strong problem-solving abilities, analytical skills, and the ability to take ownership as appropriate.
  • Should be able to do coding, debugging, performance tuning, and deploying apps to production.
  • Should have good working experience with:
      o the Hadoop ecosystem (HDFS, Hive, YARN, file formats like Avro/Parquet)
      o Kafka
      o J2EE frameworks (Spring/Hibernate/REST)
      o Spark Streaming or any other streaming technology (see the sketch after this list)
  • Ability to work sprint stories to completion, along with unit test case coverage.
  • Experience working in Agile methodology.
  • Excellent communication and coordination skills.
  • Knowledgeable in (and preferably hands-on with) UNIX environments and different continuous integration tools.
  • Must be able to integrate quickly into the team and work independently toward team goals.
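
As a hedged illustration of the streaming requirement above, here is a minimal PySpark Structured Streaming sketch that consumes a Kafka topic and prints the raw messages. The broker address and topic name are placeholders, and the spark-sql-kafka connector is assumed to be on the classpath.

    # Hypothetical sketch: consume a Kafka topic with Spark Structured Streaming.
    # Broker and topic names are placeholders; requires the spark-sql-kafka package.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("kafka-stream-demo").getOrCreate()

    stream = (
        spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "localhost:9092")  # assumed broker
        .option("subscribe", "events")                        # assumed topic
        .load()
    )

    # Kafka keys/values arrive as bytes; cast them to strings for inspection.
    messages = stream.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")

    query = messages.writeStream.format("console").start()
    query.awaitTermination()
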
Role & Responsibilities:
  • Take complete responsibility for the execution of sprint stories.
  • Be accountable for delivery of the tasks within the defined timelines, with good quality.
  • Follow the processes for project execution and delivery.
  • Follow Agile methodology.
  • Work closely with the team lead and contribute to the smooth delivery of the project.
  • Understand/define the architecture and discuss its pros and cons with the team.
  • Take part in brainstorming sessions and suggest improvements to the architecture/design.
  • Work with other team leads to get the architecture/design reviewed.
  • Work with the clients and counterparts (in the US) on the project.
  • Keep all stakeholders updated about project/task status, risks, and issues, if any.
Education: BE/B.Tech from a reputed institute.
Experience: 4 to 9 years
Keywords: java, scala, spark, software development, hadoop, hive
Locations: Pune
Namaste Credit
Posted by Abhishek J Shiri
Remote, Bengaluru (Bangalore)
3 - 6 yrs
₹5L - ₹10L / yr
analyst
analysis
DA
SQL
Joint application design
+3 more

About the Role

The following Job Specification is intended to reflect the nature, range and context of the work. It identifies the main requirements of the job but is not an exhaustive list of duties.

 

The Data Analyst is a vital role: the sole point of contact responsible for working closely with various internal teams and with the CTO. The role requires analysing a vast number of datasets and presenting meaningful, growth-relevant results from that data to management. It carries significant responsibility for communicating closely with other departments, so it needs someone who truly loves the work.

 

Job Description

  • Interpret data, analyze results using statistical techniques, and provide ongoing reports
  • Develop and implement data analytics and other strategies that optimize statistical efficiency and quality
  • Strong experience working with databases and good analytical reasoning
  • Acquire data from Finance, Technology, Marketing, and Sales data sources and maintain data systems
  • Identify, analyze, and interpret trends or patterns in complex data sets
  • Should be able to highlight incorrect data and record patterns of drawbacks relating to the cleansing of data modules
  • Must be involved in reconciliation of data sets and components across verticals, depending on the need
  • Filter and "clean" data by reviewing computer reports, printouts, and performance indicators to locate and correct code problems
  • Work with management to prioritize business and information needs
  • Locate and define new process improvement opportunities

 

 

Required Skills:

 

  • Proven working experience as a data analyst or business data analyst
  • Sound experience in MySQL, Postgres, and Oracle databases, and fluency in SQL scripts
  • Technical expertise regarding data models, database design and development, data mining, and segmentation techniques
  • Strong knowledge of and experience with reporting packages (Business Objects, etc.), databases (SQL, etc.), and programming (XML, JavaScript, or ETL frameworks)
  • Knowledge of statistics and experience using statistical packages for analyzing datasets (Excel, SPSS, SAS, etc.)
  • Strong analytical skills, with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy
  • Adept at queries, report writing, and presenting findings
  • BS in Mathematics, Economics, Computer Science, Information Management, or Statistics

 

Bengaluru (Bangalore)
2 - 3 yrs
₹15L - ₹20L / yr
Python
Scala
Hadoop
Spark
Data Engineer
+4 more
  • We are looking for a Data Engineer to build the next-generation mobile applications for our world-class fintech product.
  • The candidate will be responsible for expanding and optimising our data and data pipeline architecture, as well as optimising data flow and collection for cross-functional teams.
  • The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimising data systems and building them from the ground up.
  • Looking for a person with a strong ability to analyse and provide valuable insights to the product and business team to solve daily business problems.
  • You should be able to work in a high-volume environment, have outstanding planning and organisational skills.

 

Qualifications for Data Engineer

 

  • Working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases.
  • Experience building and optimising ‘big data’ data pipelines, architectures, and data sets.
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong analytic skills related to working with unstructured datasets. Build processes supporting data transformation, data structures, metadata, dependency and workload management.
  • Experience supporting and working with cross-functional teams in a dynamic environment.
  • Looking for a candidate with 2-3 years of experience in a Data Engineer role, who is a CS graduate or has an equivalent experience.

 

What we're looking for?

 

  • Experience with big data tools: Hadoop, Spark, Kafka and other alternate tools.
  • Experience with relational SQL and NoSQL databases, including MySql/Postgres and Mongodb.
  • Experience with data pipeline and workflow management tools such as Luigi and Airflow (a brief sketch follows this list).
  • Experience with AWS cloud services: EC2, EMR, RDS, Redshift.
  • Experience with stream-processing systems: Storm, Spark-Streaming.
  • Experience with object-oriented/object function scripting languages: Python, Java, Scala.
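
As a hedged illustration of the workflow-management requirement, here is a minimal Apache Airflow (2.x) DAG sketch with a single Python task. The DAG id, schedule, and task body are assumptions made for illustration.

    # Hypothetical sketch: a one-task Airflow 2.x DAG. Names and schedule are placeholders.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract_and_load():
        # Placeholder for a real extract/load step.
        print("moving one day's worth of data")

    with DAG(
        dag_id="daily_etl_example",  # assumed name
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        PythonOperator(
            task_id="extract_and_load",
            python_callable=extract_and_load,
        )
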
Consulting Leader
Pune, Mumbai
8 - 10 yrs
₹8L - ₹16L / yr
Data integration
talend
Hadoop
Integration
Java
+1 more

 

Job Description for:

Role: Data/Integration Architect

Experience – 8-10 Years

Notice Period: Under 30 days

Key Responsibilities: Designing and developing frameworks for batch and real-time jobs on Talend; leading the migration of these jobs from MuleSoft to Talend; maintaining best practices for the team; conducting code reviews and demos.

Core Skillsets:

Talend Data Fabric: application integration, API integration, and data integration. Knowledge of Talend Management Cloud and of deploying and scheduling jobs using TMC or AutoSys.

Programming Languages - Python/Java
Databases: SQL Server, Other Databases, Hadoop

Should have worked in an Agile environment

Sound communication skills

Should be open to learning new technologies based on business needs on the job

Additional Skills:

Awareness of other data/integration platforms like Mulesoft, Camel

Awareness of Hadoop, Snowflake, and S3

NeoQuant Solutions Pvt Ltd
Posted by Shehnaz Siddiki
Mumbai
3 - 6 yrs
₹8L - ₹11L / yr
Microsoft Business Intelligence (MSBI)
SSIS
SQL Server Reporting Services (SSRS)
SQL server
Microsoft SQL Server
+3 more

MSBI Developer

We have the following opening in our organization:

Years of Experience: 4-8 years

Location: Mumbai (Thane)/BKC/Andheri
Notice period: Max 15 days or immediate

Educational Qualification: MCA/ME/MSc-IT/BE/B.Tech/BCA/BSc IT in Computer Science

Requirements:

  • 3-8 years of consulting or relevant work experience
  • Should be good at SQL Server 2008 R2 and above
  • Should be excellent at SQL, SSRS, SSIS, and SSAS
  • Data modeling, fact and dimension design, and work on data warehouse or DW architecture design
  • Implementing new technologies like Power BI and Power BI modeling
  • Knowledge of Azure or R programming is an added advantage
  • Experience in BI and visualization technology (Tableau, Power BI)
  • Advanced T-SQL programming skills (a brief sketch follows this list)
  • Can scope out a simple or semi-complex project based on business requirements and achievable benefits
  • Evaluate, design, and implement enterprise IT-based business solutions, often working on-site to help customers deploy their solutions
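
As a hedged illustration of the T-SQL skills named above, here is a minimal Python sketch that runs a window-function query against SQL Server via pyodbc. The connection string, table, and columns are placeholders, not details from this posting.

    # Hypothetical sketch: run a T-SQL window-function query from Python via pyodbc.
    # Connection details, table, and columns are made-up placeholders.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=localhost;DATABASE=SalesDW;Trusted_Connection=yes;"
    )

    query = """
        SELECT region,
               order_id,
               amount,
               SUM(amount) OVER (PARTITION BY region) AS region_total,
               RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rank_in_region
        FROM dbo.FactOrders
    """

    for row in conn.cursor().execute(query).fetchmany(10):
        print(row.region, row.order_id, row.amount, row.region_total, row.rank_in_region)
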
Aptus Data Labs
Posted by Merlin Metilda
Bengaluru (Bangalore)
5 - 10 yrs
₹6L - ₹15L / yr
Data engineering
Big Data
Hadoop
Data Engineer
Apache Kafka
+5 more

Roles & Responsibilities

  1. Proven experience deploying and tuning open-source components into enterprise-ready production tooling
  2. Experience with datacentre (Metal as a Service, MAAS) and cloud deployment technologies (AWS or GCP Architect certificates required)
  3. Deep understanding of Linux, from kernel mechanisms through user-space management
  4. Experience with CI/CD (Continuous Integration and Deployment) system solutions (Jenkins)
  5. Use of monitoring tools (local and on public cloud platforms) such as Nagios, Prometheus, Sensu, ELK, CloudWatch, Splunk, and New Relic to trigger instant alerts, reports, and dashboards
  6. Work closely with the development and infrastructure teams to analyze and design solutions with four-nines (99.99%) uptime for globally distributed, clustered, production and non-production virtualized infrastructure
  7. Wide understanding of IP networking as well as data centre infrastructure

Skills

  1. Expert with software development tools and source code management: understanding and managing issues and code changes, and grouping them into deployment releases in a stable and measurable way to maximize production
  2. Must be expert at developing and using Ansible roles and configuring deployment templates with Jinja2 (see the sketch after this list)
  3. Solid understanding of data collection tools like Flume, Filebeat, Metricbeat, and JMX Exporter agents
  4. Extensive experience operating and tuning the Kafka streaming data platform, specifically as a message queue for big data processing
  5. Strong understanding of, and hands-on experience with:
      o the Apache Spark framework, specifically Spark Core and Spark Streaming
      o orchestration platforms: Mesos and Kubernetes
      o data storage platforms: Elastic Stack, Carbon, ClickHouse, Cassandra, Ceph, HDFS
      o core presentation technologies: Kibana and Grafana
  6. Excellent scripting and programming skills (Bash, Python, Java, Go, Rust). Must have previous experience with Rust in order to support and improve in-house developed products
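
As a hedged illustration of the Jinja2 templating mentioned above, here is a minimal Python sketch that renders a deployment config from a template, much as an Ansible role's template module does. The template body and variables are placeholders.

    # Hypothetical sketch: render a deployment config from a Jinja2 template.
    # Template text and variables are made-up placeholders.
    from jinja2 import Template

    template = Template(
        "bootstrap_servers: {{ brokers | join(',') }}\n"
        "log_dir: {{ log_dir }}\n"
        "num_partitions: {{ partitions }}\n"
    )

    rendered = template.render(
        brokers=["kafka-1:9092", "kafka-2:9092"],  # assumed hosts
        log_dir="/var/lib/kafka",
        partitions=12,
    )
    print(rendered)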

Certification

Red Hat Certified Architect certificate or equivalent required. CCNA certificate required. 3-5 years of experience running open-source big data platforms.
