SQL Developer

Posted by Nikita Aher

Pune
2 - 6 yrs
₹3L - ₹15L / yr
Skills
SQL
Linux/Unix
Shell Scripting
SQL server
PL/SQL
Data Warehouse (DWH)
Big Data
Hadoop
Job description

Datametica is looking for talented SQL engineers who will receive training and the opportunity to work on Cloud and Big Data analytics.

 

Mandatory Skills:

  • Strong in SQL development (a short illustrative sketch follows this list)
  • Hands-on with at least one scripting language, preferably shell scripting
  • Development experience in Data Warehouse projects
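
As a rough illustration of what the SQL-plus-scripting combination above looks like in practice, here is a minimal sketch; the table, columns, and data are invented for this example (the posting names no schema), and Python's built-in sqlite3 stands in for a real warehouse engine:

```python
# Minimal sketch with invented table and data; sqlite3 stands in for a
# real data warehouse engine.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, amount REAL, sold_on TEXT);
    INSERT INTO sales VALUES
        ('West', 120.0, '2021-01-05'),
        ('West',  80.0, '2021-02-11'),
        ('East', 200.0, '2021-01-20');
""")

# A typical warehouse-style aggregation: revenue per region.
query = """
    SELECT region, SUM(amount) AS total_revenue
    FROM sales
    GROUP BY region
    ORDER BY total_revenue DESC;
"""
for region, total in conn.execute(query):
    print(f"{region}: {total:.2f}")
```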

Opportunities:

  • Selected candidates will be provided training opportunities on one or more of the following: Google Cloud, AWS, DevOps tools, and Big Data technologies like Hadoop, Pig, Hive, Spark, Sqoop, Flume, and Kafka
  • Would get a chance to be part of the enterprise-grade implementation of Cloud and Big Data systems
  • Will play an active role in setting up the modern data platform based on Cloud and Big Data
  • Would be part of teams with rich experience in various aspects of distributed systems and computing
About Datametica Solutions Private Limited

A global leader in data warehouse migration and modernization to the cloud, we empower businesses by migrating their data, workloads, ETL, and analytics to the cloud through automation.


We have expertise in transforming legacy platforms such as Teradata, Oracle, Hadoop, Netezza, Vertica, and Greenplum, along with ETL tools like Informatica, DataStage, and Ab Initio, to cloud-based data warehousing, with further capabilities in data engineering, advanced analytics, data management, data lakes, and cloud optimization.


Datametica is a key partner of the major cloud service providers: Google, Microsoft, Amazon, and Snowflake.


We have our own products!

Eagle – Data warehouse Assessment & Migration Planning Product

Raven – Automated Workload Conversion Product

Pelican - Automated Data Validation Product, which helps automate and accelerate data migration to the cloud.


Why join us!

Datametica is a place to innovate, bring new ideas to life, and learn new things. We believe in building a culture of innovation, growth, and belonging. Our people and their dedication over the years are the key factors in achieving our success.


Benefits we provide!

Working with highly technical, passionate, mission-driven people

Subsidized Meals & Snacks

Flexible Schedule

Approachable leadership

Access to various learning tools and programs

Pet Friendly

Certification Reimbursement Policy


Check out more about us on our website below!

www.datametica.com

Founded: 2013
Type: Products & Services
Size: 100-1000 employees
Stage: Profitable
Similar jobs
Founded 2004  •  Products & Services  •  100-1000 employees  •  Profitable
Data Science
Hadoop
Machine Learning (ML)
Artificial Intelligence (AI)
Natural Language Processing (NLP)
Computer Vision
Predictive modelling
TensorFlow
PyTorch
Python
Pune, Bengaluru (Bangalore)
5 - 8 yrs
₹15L - ₹22L / yr

Senior Data Scientist - Applied AI


Who are we?

Searce is a niche cloud consulting business with a futuristic tech DNA. We use new-age tech to realise the “Next” in the “Now” for our clients. We specialise in Cloud Data Engineering, AI/Machine Learning, and advanced cloud infrastructure tech such as Anthos and Kubernetes. We are one of the top and fastest-growing partners for Google Cloud and AWS globally, with over 2,500 clients successfully moved to the cloud.


What do we believe?

  • Best practices are overrated
      • Implementing best practices can only make one average.
  • Honesty and Transparency
      • We believe in naked truth. We do what we tell and tell what we do.
  • Client Partnership
    • Client - Vendor relationship: No. We partner with clients instead. 
    • And our sales team comprises 100% of our clients.

How do we work?

It’s all about being Happier first. And the rest follows. Searce work culture is defined by HAPPIER.

  • Humble: Happy people don’t carry ego around. We listen to understand; not to respond.
  • Adaptable: We are comfortable with uncertainty. And we accept changes well. As that’s what life's about.
  • Positive: We are super positive about work & life in general. We love to forget and forgive. We don’t hold grudges. We don’t have time or adequate space for it.
  • Passionate: We are as passionate about the great street-food vendor across the street as about Tesla’s new model and so on. Passion is what drives us to work and makes us deliver the quality we deliver.
  • Innovative: Innovate or Die. We love to challenge the status quo.
  • Experimental: We encourage curiosity & making mistakes.
  • Responsible: Driven. Self-motivated. Self-governing teams. We own it.

So, what are we hunting for?

As a Senior Data Scientist, you will help develop and enhance the algorithms and technology that power our unique system. This role covers a wide range of challenges, from developing new models using pre-existing components to enabling current systems to be more intelligent. You should be able to train models using existing data and use them in the most creative manner to deliver the smartest experience to customers. You will have to develop multiple AI applications that push the threshold of intelligence in machines.


Working on multiple projects at a time, you will have to maintain a consistently high level of attention to detail while finding creative ways to provide analytical insights. You will also have to thrive in a fast, high-energy environment and should be able to balance multiple projects in real time. The thrill of the next big challenge should drive you, and when faced with an obstacle, you should be able to find clever solutions. You must have the ability and interest to work on a range of different types of projects and business processes, and must have a background that demonstrates this ability.

Your bucket of Undertakings:

  1. Collaborate with team members to develop new models to be used for classification problems
  2. Work on software profiling, performance tuning and analysis, and other general software engineering tasks
  3. Use independent judgment to take existing data and build new models from it
  4. Collaborate and provide technical guidance; come up with new ideas, rapid prototyping, and converting prototypes into scalable products
  5. Conduct experiments to assess the accuracy and recall of language processing modules and to study the effect of such experiments
  6. Lead AI R&D initiatives to include prototypes and minimum viable products
  7. Work closely with multiple teams on projects like Visual quality inspection, ML Ops, Conversational banking, Demand forecasting, Anomaly detection etc. 
  8. Build reusable and scalable solutions for use across the customer base
  9. Prototype and demonstrate AI related products and solutions for customers
  10. Assist business development teams in the expansion and enhancement of a pipeline to support short- and long-range growth plans
  11. Identify new business opportunities and prioritize pursuits for AI 
  12. Participate in long range strategic planning activities designed to meet the Company’s objectives and to increase its enterprise value and revenue goals

Education & Experience:

  1. BE/B.Tech/Masters in a quantitative field such as CS, EE, Information Sciences, Statistics, Mathematics, Economics, Operations Research, or related, with a focus on applied and foundational Machine Learning, AI, NLP and/or data-driven statistical analysis & modelling
  2. 5+ years of experience applying AI/ML/NLP/deep learning/data-driven statistical analysis & modelling solutions to multiple domains; financial engineering and financial processes experience a plus
  3. Strong, proven programming skills with machine learning, deep learning and big data frameworks including TensorFlow, Caffe, Spark and Hadoop, plus experience writing complex programs and implementing custom algorithms in these and other environments (a minimal TensorFlow sketch follows this list)
  4. Experience beyond using open source tools as-is; writing custom code on top of, or in addition to, existing open source frameworks
  5. Proven capability in demonstrating successful advanced technology solutions (either prototypes, POCs, well-cited research publications, and/or products) using ML/AI/NLP/data science in one or more domains
  6. Ability to research and implement novel machine learning and statistical approaches
  7. Experience in data management, data analytics middleware, platforms and infrastructure, cloud and fog computing is a plus
  8. Excellent communication skills (oral and written) to explain complex algorithms and solutions to stakeholders across multiple disciplines, and ability to work in a diverse team
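
Since item 3 above names TensorFlow among the expected frameworks, here is a hedged, minimal Keras classifier for orientation only; the toy architecture and synthetic data are placeholders, not anything specified in the role:

```python
# Minimal sketch: a toy binary classifier on synthetic data, purely to
# illustrate the TensorFlow/Keras workflow named in the requirements.
import numpy as np
import tensorflow as tf

X = np.random.rand(256, 20).astype("float32")   # 256 samples, 20 features
y = (X.sum(axis=1) > 10).astype("float32")      # synthetic binary labels

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [loss, accuracy]
```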

Accomplishment Set: 


  1. Extensive experience with Hadoop and machine learning algorithms
  2. Exposure to Deep Learning, Neural Networks, or related fields, and a strong interest and desire to pursue them
  3. Experience in Natural Language Processing, Computer Vision, Machine Learning or Machine Intelligence (Artificial Intelligence)
  4. Passion for solving NLP problems
  5. Experience with specialized tools and projects for working with natural language processing
  6. Knowledge of machine learning frameworks like TensorFlow, PyTorch
  7. Experience with software version control systems like GitHub
  8. Fast learner, able to work independently as well as in a team environment, with good written and verbal communication skills
Job posted by Mishita Juneja
Founded 2011  •  Product  •  500-1000 employees  •  Profitable
SQL
Python
Trend analysis
R
BigQuery
Mumbai, NCR (Delhi | Gurgaon | Noida), Bengaluru (Bangalore)
5 - 9 yrs
Best in industry
MX Player is the world’s best video player with an install base of 500+ million worldwide and 350+ million in India. We are installed on every second smartphone in India.

We cater to a wide range of entertainment categories including video streaming, music streaming, games and short videos via our MX Player and MX Takatak apps which are our flagship products.

Both MX Player and MX Takatak iOS apps are frequently featured amongst the top 5 apps in the Entertainment category on the Indian App Store. These are built by a small team of engineers based in Mumbai.

 

Roles and responsibilities:

 

  1. Years of experience – 5+ years
  2. Ability to synthesize complex data into actionable goals.
  3. Interpersonal skills to work collaboratively with various stakeholders with competing interests.
  4. Find analytical trends and logic to better promote content on the platform.
  5. Evaluate content and carry out qualitative research on competition analysis and viewership trends for OTT.
  6. Build hypotheses and test them through relevant KPIs.
  7. Communicate key findings to programming stakeholders and inculcate best practices in the programming team.

 

Skills & Competencies:

 

  1. Good knowledge of BigQuery, SQL, Python/R, MS Excel and data infrastructure (a hedged BigQuery example follows this list).
  2. Ability to work as a team player in a target-driven work environment, meeting deadlines.
  3. Excellent time management skills.
  4. Strong communication skills.
  5. Proficient in MS Office Suite.
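
As a hedged illustration of the BigQuery-plus-Python combination in item 1, here is a minimal sketch; it assumes the google-cloud-bigquery client library and default credentials, and the project, dataset, and table names are hypothetical:

```python
# Minimal sketch; the table name below is a hypothetical placeholder.
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

# A trend-analysis style query: distinct daily viewers over the last week.
sql = """
    SELECT DATE(event_time) AS day, COUNT(DISTINCT user_id) AS viewers
    FROM `my-project.analytics.playback_events`
    WHERE event_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
    GROUP BY day
    ORDER BY day
"""
for row in client.query(sql).result():
    print(row.day, row.viewers)
```
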
Job posted by Mittal Soni
Founded 2016  •  Product  •  20-100 employees  •  Bootstrapped
Java
Python
Hadoop
Apache Hive
Apache Kafka
Spark
Remote, Bengaluru (Bangalore)
2 - 10 yrs
₹2L - ₹15L / yr

Job Title: Data Engineer (Remote)

Job Description

You will work on:

 

We help many of our clients make sense of their large investments in data – be it building analytics solutions or machine learning applications. You will work on cutting-edge, cloud-native technologies to crunch terabytes of data into meaningful insights.

 

What you will do (Responsibilities):

Collaborate with Business, Marketing & CRM teams to build highly efficient data pipelines.

You will be responsible for:

  • Dealing with customer data and building highly efficient pipelines
  • Building insights dashboards
  • Troubleshooting data loss, data inconsistency, and other data-related issues
  • Maintaining backend services (written in Golang) for metadata generation
  • Providing prompt support and solutions for Product, CRM, and Marketing partners

 

What you bring (Skills):

2+ years of experience in data engineering

  • Coding experience with one of the following languages: Golang, Java, Python, C++
  • Fluent in SQL
  • Working experience with at least one of the following data-processing engines: Flink, Spark, Hadoop, Hive (a small Spark sketch follows this list)
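
As a rough sketch of the engines named above, here is a minimal PySpark batch job; the file path and column names are invented for illustration:

```python
# Minimal sketch; paths and columns are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("customer-pipeline").getOrCreate()

# Acquire: load raw customer events from CSV.
events = spark.read.csv("data/events.csv", header=True, inferSchema=True)

# Refine: keep valid rows, then aggregate spend per customer.
summary = (
    events.filter(F.col("amount") > 0)
          .groupBy("customer_id")
          .agg(F.sum("amount").alias("total_spend"),
               F.count("*").alias("n_events"))
)

# Store: write the result as Parquet for downstream dashboards.
summary.write.mode("overwrite").parquet("out/customer_summary")
spark.stop()
```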

 

Great if you know (Skills):

  • T-shaped skills are always preferred – so if you have the passion to work across the full-stack spectrum, it is more than welcome.
  • Exposure to infrastructure-based skills like Docker, Istio, Kubernetes is a plus
  • Experience with building and maintaining large scale and/or real-time complex data processing pipelines using Flink, Hadoop, Hive, Storm, etc.

 

Advantage Cognologix:

  • Higher degree of autonomy, startup culture & small teams
  • Opportunities to become an expert in emerging technologies
  • Remote working options for the right maturity level
  • Competitive salary & family benefits
  • Performance-based career advancement

 

 

About Cognologix:

Cognologix helps companies disrupt by reimagining their business models and innovating like a startup. We are at the forefront of digital disruption and take a business-first approach to help meet our clients’ strategic goals.

We are a data-focused organization helping our clients deliver their next generation of products in the most efficient, modern and cloud-native way.

 

Skills:

Java, Python, Hadoop, Hive, Spark programming, Kafka

 

Thanks & regards,

Cognologix- HR Dept. 

Job posted by Rupa Kadam
Big Data
Google Cloud Platform (GCP)
Data Warehouse (DWH)
ETL
Systems Development Life Cycle (SDLC)
Hadoop
Java
Scala
Python
SQL
Scripting
Teradata
HiveQL
Pig
Spark
Apache Kafka
Windows Azure
Remote, Bengaluru (Bangalore)
4 - 8 yrs
₹4L - ₹16L / yr
Job Description
Job Title: Data Engineer
Tech Job Family: DACI
Minimum Qualifications:
• Bachelor's Degree in Engineering, Computer Science, CIS, or related field (or equivalent work experience in a related field)
• 2 years of experience in Data, BI or Platform Engineering, Data Warehousing/ETL, or Software Engineering
• 1 year of experience working on project(s) involving the implementation of solutions applying development life cycles (SDLC)
Preferred Qualifications:
• Master's Degree in Computer Science, CIS, or related field
• 2 years of IT experience developing and implementing business systems within an organization
• 4 years of experience working with defect or incident tracking software
• 4 years of experience with technical documentation in a software development environment
• 2 years of experience working with an IT Infrastructure Library (ITIL) framework
• 2 years of experience leading teams, with or without direct reports
• Experience with application and integration middleware
• Experience with database technologies
Data Engineering
• 2 years of experience in Hadoop or any Cloud Bigdata components (specific to the Data Engineering role)
• Expertise in Java/Scala/Python, SQL, Scripting, Teradata, Hadoop (Sqoop, Hive, Pig, MapReduce), Spark (Spark Streaming, MLlib), Kafka or equivalent Cloud Bigdata components (specific to the Data Engineering role); a small Spark-on-Kafka sketch follows this section
BI Engineering
• Expertise in MicroStrategy/Power BI/SQL, Scripting, Teradata or equivalent RDBMS, Hadoop (OLAP on Hadoop), Dashboard development, Mobile development (specific to the BI Engineering role)
Platform Engineering
• 2 years of experience in Hadoop, NoSQL, RDBMS or any Cloud Bigdata components, Teradata, MicroStrategy (specific to the Platform Engineering role)
• Expertise in Python, SQL, Scripting, Teradata, Hadoop utilities like Sqoop, Hive, Pig, Map Reduce, Spark, Ambari, Ranger, Kafka or equivalent Cloud Bigdata components (specific to the Platform Engineering role)
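
As a hedged illustration of the Spark Streaming and Kafka pairing listed above, here is a minimal Structured Streaming sketch; it assumes a local Kafka broker and the spark-sql-kafka connector package, and the broker address and topic are placeholders:

```python
# Minimal sketch; assumes the spark-sql-kafka connector is on the
# classpath, and the broker/topic below are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-stream-demo").getOrCreate()

# Read an unbounded stream of events from a Kafka topic.
stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "events")
    .load()
)

# Kafka keys/values arrive as bytes; cast them and count messages per key.
counts = (
    stream.select(F.col("key").cast("string"), F.col("value").cast("string"))
          .groupBy("key")
          .count()
)

# Stream running counts to the console (demo sink only).
query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
```
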
Lowe’s is an equal opportunity employer and administers all personnel practices without regard to race, color, religion, sex, age, national origin, disability, sexual orientation, gender identity or expression, marital status, veteran status, genetics or any other category protected under applicable law.
Job posted by Sanjay Biswakarma
At an IT Services Major, hiring for a leading insurance player.
Agency job
Microsoft Windows Azure
Hadoop
Big Data
Apache Kafka
Hbase
Chennai
3 - 5 yrs
₹5L - ₹10L / yr

Client: An IT Services Major, hiring for a leading insurance player.

Position: SENIOR CONSULTANT

Job Description:

  • Azure admin – senior consultant with HDInsight (Big Data)

Skills and Experience

  • Microsoft Azure Administrator certification
  • Big Data project experience on the Azure HDInsight stack, with big data processing frameworks such as Spark, Hadoop, Hive, Kafka or HBase
  • Preferred: Insurance or BFSI domain experience
  • 3 to 5 years of experience is required.
Job posted by Vanshika Kaur
Founded 2014  •  Products & Services  •  20-100 employees  •  Bootstrapped
Windows Azure
Python
SQL
Spark
Scala
PySpark
Azure databricks
Remote, Bengaluru (Bangalore)
3 - 6 yrs
₹10L - ₹20L / yr

Azure – Data Engineer

  • At least 2 years’ hands-on experience working with an Agile data engineering team on big data pipelines using Azure in a commercial environment.
  • Experience dealing with senior stakeholders/leadership.
  • Understanding of Azure data security and encryption best practices. [ADFS/ACLs]

Databricks – experience writing in and using Databricks, using Python to transform and manipulate data (a minimal sketch follows).
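
A minimal sketch of that kind of Databricks-style Python transformation; the storage account, containers, and column names below are hypothetical placeholders (on Databricks the `spark` session already exists):

```python
# Minimal sketch; ADLS Gen2 paths and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("adls-transform").getOrCreate()

# Read raw Parquet data from Azure Data Lake Storage Gen2.
raw = spark.read.parquet(
    "abfss://raw@examplestorage.dfs.core.windows.net/sales/2021/"
)

# Transform: normalise a column name and stamp the load date.
clean = (
    raw.withColumnRenamed("AMT", "amount")
       .withColumn("load_date", F.current_date())
)

# Write the curated table back to the lake for Synapse to consume.
clean.write.mode("overwrite").parquet(
    "abfss://curated@examplestorage.dfs.core.windows.net/sales/"
)
```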

Data Factory – experience using Data Factory in an enterprise solution to build data pipelines. Experience calling REST APIs.

Synapse/data warehouse – experience using Synapse/a data warehouse to present data securely and to build and manage data models.

Microsoft SQL Server – we’d expect the candidate to have come from a SQL/data background and progressed into Azure.

Power BI – experience with this is preferred.

Additionally

  • Experience using Git as a source control system
  • Understanding of DevOps concepts and application
  • Understanding of Azure Cloud costs/management and running platforms efficiently
Job posted by Vishal Sharma
Founded 2009  •  Services  •  100-1000 employees  •  Profitable
Data Science
R Programming
Python
Machine Learning (ML)
Hadoop
SQL server
Linear regression
Predictive modelling
Chennai
- yrs
₹10L - ₹20L / yr
Job Description:

Be responsible for scaling our analytics capability across all internal disciplines and guide our strategic direction with regard to analytics. Organize and analyze large, diverse data sets across multiple platforms. Identify key insights and leverage them to inform and influence product strategy. Handle technical interactions with vendors or partners for scope/approach & deliverables. Develop proofs of concept to prove or disprove the validity of a concept. Work with all parts of the business to identify analytical requirements and formalize an approach for reliable, relevant, accurate, efficient reporting on those requirements. Design and implement advanced statistical testing for customized problem solving. Deliver concise verbal and written explanations of analyses to senior management that elevate findings into strategic recommendations.

Desired Candidate Profile:

  • MTech / BE / BTech / MSc in CS, Stats, Maths, Operations Research, Econometrics or any quantitative field
  • Experience using Python, R, SAS
  • Experience working with large data sets and big data systems (SQL, Hadoop, Hive, etc.)
  • Keen aptitude for large-scale data analysis with a passion for identifying key insights from data
  • Expert working knowledge of various machine learning algorithms such as XGBoost, SVM, etc.

We are looking for candidates with experience in any of the following:

  • Unsecured Loans & SME Loans analytics (cards, installment loans) – risk-based pricing analytics
  • Differential pricing / selection analytics (retail, airlines / travel, etc.)
  • Digital product companies or digital eCommerce with a product mindset
  • Fraud / Risk from Banks, NBFC / Fintech / Credit Bureau
  • Online media with knowledge of media, online ads & sales (agencies) – knowledge of DMP, DFP, Adobe/Omniture tools, Cloud
  • Consumer Durable Loans lending companies (experience in Credit Cards, Personal Loan optional)
  • Tractor Loans lending companies (experience in Farm)
  • Recovery, Collections analytics
  • Marketing Analytics with Digital Marketing, Market Mix modelling, Advertising Technology
Job posted by Vinodhkumar Panneerselvam
At Product / Internet / Media Companies
Agency job
Data processing
Python
Data engineering
Big Data
HDFS
Spark
Hadoop
Data lake
Bengaluru (Bangalore)
- yrs
₹15L - ₹30L / yr

REQUIREMENT:

  •  Previous experience of working in large scale data engineering
  •  4+ years of experience working in data engineering and/or backend technologies with cloud experience (any) is mandatory.
  •  Previous experience of architecting and designing backend for large scale data processing.
  •  Familiarity and experience of working in different technologies related to data engineering – different database technologies, Hadoop, spark, storm, hive etc.
  •  Hands-on, with the ability to contribute to key portions of the data engineering backend.
  •  Self-inspired and motivated to drive for exceptional results.
  •  Familiarity and experience working with different stages of data engineering – data acquisition, data refining, large scale data processing, efficient data storage for business analysis.
  •  Familiarity and experience working with different DB technologies and how to scale them.

RESPONSIBILITY:

  •  End-to-end responsibility for data engineering architecture and design, through development and implementation.
  •  Build data engineering workflow for large scale data processing.
  •  Discover opportunities in data acquisition.
  •  Bring industry best practices for data engineering workflow.
  •  Develop data set processes for data modelling, mining and production.
  •  Take additional tech responsibilities for driving an initiative to completion
  •  Recommend ways to improve data reliability, efficiency and quality
  •  Goes out of their way to reduce complexity.
  •  Humble and outgoing - engineering cheerleaders.
Job posted by Meenu Singh
Founded 2015  •  Services  •  100-1000 employees  •  Profitable
Hadoop
Spark
SQL
Big Data
Python
Relational Database (RDBMS)
Remote, NCR (Delhi | Gurgaon | Noida)
- yrs
₹5L - ₹12L / yr

JD:

Required Skills:

  • Intermediate- to expert-level hands-on programming in one of the following languages: Java, Python, PySpark, or Scala
  • Strong practical knowledge of SQL
  • Hands-on experience with Spark/Spark SQL (a small sketch follows this list)
  • Data structures and algorithms
  • Hands-on experience as an individual contributor in the design, development, testing and deployment of Big Data applications
  • Experience with Big Data tools such as Hadoop, MapReduce, Spark, etc.
  • Experience with NoSQL databases like HBase, etc.
  • Experience with the Linux OS environment (shell script, AWK, SED)
  • Intermediate RDBMS skills, able to write SQL queries with complex relations on top of a large RDBMS (100+ tables)
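
A minimal Spark SQL sketch of the hands-on work this list asks for; the rows are invented so the example is self-contained:

```python
# Minimal sketch; the data is invented so the example runs anywhere
# PySpark is installed.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sparksql-demo").getOrCreate()

orders = spark.createDataFrame(
    [(1, "A", 100.0), (2, "B", 50.0), (3, "A", 75.0)],
    ["order_id", "customer", "amount"],
)
orders.createOrReplaceTempView("orders")

# Spark SQL over the registered view: a toy aggregate with a HAVING filter.
spark.sql("""
    SELECT customer, COUNT(*) AS n_orders, SUM(amount) AS revenue
    FROM orders
    GROUP BY customer
    HAVING SUM(amount) > 60
    ORDER BY revenue DESC
""").show()
```
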
Job posted by Tapan Sahani
Founded 2012  •  Product  •  100-500 employees  •  Raised funding
Big Data
Apache Hive
NOSQL Databases
MongoDB
Web Scraping
Redshift
Mumbai, Navi Mumbai
- yrs
₹30L - ₹90L / yr
Your Role:

  • You will lead the strategy, planning, and engineering for Data at Dream11
  • Build a robust realtime & batch analytics platform for analytics & machine-learning
  • Design and develop the Data Model for our data warehouse and other data engineering solutions
  • Collaborate with various departments to develop and maintain a data platform solution and recommend emerging technologies for data storage, processing and analytics

MUST have:

  • 9+ years of experience in data engineering, data modelling, schema design and 5+ years of programming expertise in Java or Scala
  • Understanding of real-time as well as batch processing big data technologies (Spark, Storm, Kafka, Flink, MapReduce, Yarn, Pig, Hive, HDFS, Oozie etc)
  • Developed applications that work with NoSQL stores (e.g. ElasticSearch, Hbase, Cassandra, MongoDB, CouchDB)
  • Experience in gathering and processing raw data at scale including writing scripts, web scraping, calling APIs, writing SQL queries, etc (a small scraping/API sketch follows this list)
  • Bachelor/Master in Computer Science/Engineering or related technical degree

Bonus:

  • Experience in cloud based data stores like Redshift and Big Query is an advantage
  • Love sports – especially cricket and football
  • Have worked previously in a high-growth tech startup
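
As a hedged illustration of the "writing scripts, web scraping, calling APIs" bullet above, here is a minimal sketch; the URL and response shape are placeholders, not a real Dream11 API:

```python
# Minimal sketch; the endpoint and JSON layout are hypothetical.
import requests

resp = requests.get("https://example.com/api/matches", timeout=10)
resp.raise_for_status()

# Gather raw records from a JSON API and keep only the fields we need.
records = [
    {"id": m["id"], "score": m.get("score")}
    for m in resp.json().get("matches", [])
]
print(f"fetched {len(records)} records")
```
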
Job posted by Vivek Pandey