Data Engineer
Marktine
Posted by Vishal Sharma
3 - 7 yrs
₹5L - ₹10L / yr
Remote, Bengaluru (Bangalore)
Skills
Data Warehouse (DWH)
Spark
Data engineering
Python
PySpark
Snowflake schema
SQL
PL/SQL
Microsoft Azure
Amazon Web Services (AWS)

Basic Qualifications

- Need to have a working knowledge of AWS Redshift.

- Minimum 1 year of experience designing and implementing a fully operational, production-grade, large-scale data solution on Snowflake Data Warehouse.

- 3 years of hands-on experience building productized data ingestion and processing pipelines using Spark, Scala, and Python.

- 2 years of hands-on experience designing and implementing production-grade data warehousing solutions

- Expertise and excellent understanding of Snowflake Internals and integration of Snowflake with other data processing and reporting technologies

- Excellent presentation and communication skills, both written and verbal

- Ability to problem-solve and architect in an environment with unclear requirements
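Since the skills list pairs Snowflake the warehouse with the snowflake schema pattern, a brief sketch of the latter may help: dimension tables are normalized into sub-dimensions, which adds a join hop between the fact table and its attributes. The tables, columns, and data below are illustrative assumptions (shown in SQLite for portability), not anything specified in the posting.

```python
import sqlite3

# Illustrative snowflake schema: a sales fact table, a product dimension,
# and a category sub-dimension split out of it (the "snowflaking").
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE dim_category (category_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE dim_product  (product_id  INTEGER PRIMARY KEY, name TEXT,
                           category_id INTEGER REFERENCES dim_category);
CREATE TABLE fact_sales   (sale_id INTEGER PRIMARY KEY,
                           product_id INTEGER REFERENCES dim_product,
                           amount REAL);
INSERT INTO dim_category VALUES (1, 'Beverages');
INSERT INTO dim_product  VALUES (10, 'Coffee', 1), (11, 'Tea', 1);
INSERT INTO fact_sales   VALUES (100, 10, 4.5), (101, 11, 3.0), (102, 10, 4.5);
""")
# Reporting queries pay for the normalization with an extra join hop.
cur.execute("""
    SELECT c.name, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product  p ON p.product_id  = f.product_id
    JOIN dim_category c ON c.category_id = p.category_id
    GROUP BY c.name
""")
total = cur.fetchall()  # one row per category with summed sales
```

The trade-off versus a star schema is less storage redundancy at the cost of more joins, which is worth keeping in mind when modeling dimensions on Snowflake.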


About Marktine

Founded: 2016
Size: 100-1000
Stage: Bootstrapped
About
We are a decision science organization that helps clients leverage their under-utilized, unstructured, raw, and remote data to build business-impactful applications using AI/ML technologies. We comprise Data Enthusiasts (engineers, scientists, and visualizers), Business Consultants, and Sales Managers, with a core focus on addressing your business growth through data.
Connect with the team
Vishal Sharma
Company social profiles
linkedin

Similar jobs

ketteQ
Posted by Nikhil Jain
Remote only
5 - 15 yrs
₹20L - ₹35L / yr
ETL
SQL
PostgreSQL

ketteQ is a supply chain planning and automation platform. We are looking for an extremely strong and experienced Technical Consultant to help with system design, data engineering, and software configuration and testing during the implementation of supply chain planning solutions. This job comes with a very attractive compensation package and a work-from-home benefit. If you are a high-energy, motivated, self-starting individual, this could be a fantastic opportunity for you.

Responsible for the technical design and implementation of supply chain planning solutions.

Responsibilities

  • Design and document system architecture
  • Design data mappings
  • Develop integrations
  • Test and validate data
  • Develop customizations
  • Deploy solution
  • Support demo development activities

Requirements

  • Minimum 5 years' experience in the technical implementation of enterprise software, preferably supply chain planning software
  • Proficiency in ANSI SQL/PostgreSQL
  • Proficiency in ETL tools such as Pentaho, Talend, Informatica, or MuleSoft
  • Experience with web services and REST APIs
  • Knowledge of AWS
  • Salesforce and Tableau experience is a plus
  • Excellent analytical skills
  • Must possess excellent verbal and written communication skills and be able to communicate effectively with international clients
  • Must be a self-starter and a highly motivated individual looking to make a career in supply chain management
  • Quick thinker with proven decision-making and organizational skills
  • Must be flexible to work non-standard hours to accommodate globally dispersed teams and clients

Education

  • Bachelor's degree in Engineering from a top-ranked university with above-average grades
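One of the responsibilities above is to "test and validate data" after a mapping is applied. A minimal sketch of what that step can look like in Python; the field names and validation rules are hypothetical, not from the posting:

```python
# Check that every required field in a mapped extract is populated;
# return the coordinates of each failure for reporting.
def validate_rows(rows, required_fields):
    failures = []
    for i, row in enumerate(rows):
        for field in required_fields:
            if row.get(field) in (None, ""):
                failures.append((i, field))
    return failures

# Hypothetical supply-chain extract: the second record is missing a quantity.
extract = [
    {"sku": "A-1", "qty": 5, "site": "DC-01"},
    {"sku": "A-2", "qty": None, "site": "DC-01"},
]
problems = validate_rows(extract, ["sku", "qty", "site"])  # [(1, "qty")]
```

In practice this kind of check runs alongside row-count and control-total comparisons between source and target after each load.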
codersbrain
Posted by Tanuj Uppal
Delhi
4 - 8 yrs
₹2L - ₹15L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+5 more
  • Mandatory: hands-on experience in Python and PySpark.
  • Experience building PySpark applications using Spark DataFrames in Python, with Jupyter Notebook and PyCharm (IDE).
  • Experience optimizing Spark jobs that process huge volumes of data.
  • Hands-on experience with version control tools such as Git.
  • Experience with Amazon's analytics services such as Amazon EMR and Lambda functions.
  • Experience with Amazon's compute services such as AWS Lambda and Amazon EC2, storage services such as S3, and other services such as SNS.
  • Experience/knowledge of bash/shell scripting will be a plus.
  • Experience working with fixed-width, delimited, and multi-record file formats.
  • Hands-on experience with tools like Jenkins to build, test, and deploy applications.
  • Awareness of DevOps concepts and the ability to work in an automated release pipeline environment.
  • Excellent debugging skills.
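The fixed-width file formats mentioned above are parsed by slicing each line at known column boundaries. A plain-Python sketch follows; in a PySpark job the same slicing would typically sit inside a mapped function or UDF. The field layout here is an assumption for illustration:

```python
# (name, start, end) column boundaries for a hypothetical fixed-width layout.
LAYOUT = [("id", 0, 4), ("name", 4, 14), ("amount", 14, 22)]

def parse_fixed_width(line, layout=LAYOUT):
    """Slice one fixed-width line into a dict of stripped field values."""
    return {name: line[start:end].strip() for name, start, end in layout}

line = "0001" + "Coffee".ljust(10) + "4.50".rjust(8)
record = parse_fixed_width(line)  # {'id': '0001', 'name': 'Coffee', 'amount': '4.50'}
```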
AxionConnect Infosolutions Pvt Ltd
Posted by Shweta Sharma
Pune, Bengaluru (Bangalore), Hyderabad, Nagpur, Chennai
5.5 - 7 yrs
₹20L - ₹25L / yr
Django
Flask
Snowflake
Snowflake schema
SQL
+4 more

Job Location: Hyderabad/Bangalore/ Chennai/Pune/Nagpur

Notice period: Immediate - 15 days

 

Python Developer with Snowflake

Job Description:


  1. 5.5+ years of strong Python development experience with Snowflake.
  2. Strong hands-on experience with SQL and the ability to write complex queries.
  3. Strong understanding of how to connect to Snowflake using Python; should be able to handle any type of file.
  4. Development of data analysis and data processing engines using Python.
  5. Good experience in data transformation using Python.
  6. Experience in Snowflake data loads using Python.
  7. Experience creating user-defined functions in Snowflake.
  8. SnowSQL implementation.
  9. Knowledge of query performance tuning is an added advantage.
  10. Good understanding of data warehouse (DWH) concepts.
  11. Ability to interpret/analyze business requirements and functional specifications.
  12. Good to have: dbt, Fivetran, and AWS knowledge.
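The data-transformation-in-Python step ahead of a Snowflake load can be sketched as follows. The column names and cleanup rules are hypothetical; the actual load would go through the Snowflake Python connector (e.g. a parameterized executemany), which is only indicated in a comment here:

```python
import csv
import io

def transform(text):
    """Parse a delimited extract and coerce/clean fields into typed tuples."""
    rows = []
    for rec in csv.DictReader(io.StringIO(text)):
        rows.append((int(rec["id"]), float(rec["amount"].strip())))
    return rows

raw = "id,amount\n1, 10.5 \n2,3\n"   # hypothetical source extract
rows = transform(raw)
# cur.executemany("INSERT INTO stage_orders VALUES (%s, %s)", rows)  # Snowflake load step
```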
Spica Systems
Posted by Priyanka Bhattacharya
Kolkata
3 - 5 yrs
₹7L - ₹12L / yr
Python
Apache Spark
We are a Silicon Valley-based start-up, established in 2019, recognized as experts in building products and providing R&D and software development services in a wide range of leading-edge technologies such as LTE, 5G, cloud services (public: AWS, Azure, GCP; private: OpenStack), and Kubernetes. We have built a highly scalable and secure 5G Packet Core Network, orchestrated by an ML-powered Kubernetes platform, which can be deployed in various multi-cloud modes along with a test tool. Headquartered in San Jose, California, we have our R&D centre in Sector V, Salt Lake, Kolkata.
 

Requirements:

  • Overall 3 to 5 years of experience in designing and implementing complex, large-scale software.
  • Proficiency in Python is a must.
  • Experience in Apache Spark, Scala, Java, and Delta Lake.
  • Experience in designing and implementing templated ETL/ELT data pipelines.
  • Expert-level experience in data pipeline orchestration using Apache Airflow for large-scale production deployments.
  • Experience in visualizing data from various tasks in the data pipeline using Apache Zeppelin, Plotly, or any other visualization library.
  • Log management and log monitoring using ELK/Grafana.
  • GitHub integration.

 

Technology Stack: Apache Spark, Apache Airflow, Python, AWS, EC2, S3, Kubernetes, ELK, Grafana, Apache Arrow, Java
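The Airflow orchestration requirement above is, at its core, about expressing task dependencies as a DAG and running them in a valid topological order. A pure-Python sketch of that idea using the standard library; the task names are illustrative, and in Airflow itself these would be operators wired together with `>>`:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical pipeline dependencies: each task maps to the set of tasks
# it depends on, exactly as an Airflow DAG would encode them.
deps = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
    "report": {"load"},
}
order = list(TopologicalSorter(deps).static_order())  # valid execution order
```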

Cervello
Agency job
via StackNexus, posted by Suman Kattella
Hyderabad
5 - 7 yrs
₹5L - ₹15L / yr
Data engineering
Data modeling
Data Warehouse (DWH)
SQL
Windows Azure
+3 more
Contract job - long-term, for 1 year
 
Client - Cervello
Job Role - Data Engineer
Location - Remote till covid ( Hyderabad Stacknexus office post covid)
Experience - 5 - 7 years
Skills Required - Should have hands-on experience in Azure data modelling, Python, SQL, and Azure Databricks.
Notice period - Immediate to 15 days
Reval Analytics
Posted by Jyoti Nair
Pune
3 - 6 yrs
₹5L - ₹9L / yr
Python
Django
Big Data

Position Name: Software Developer

Required Experience: 3+ Years

Number of positions: 4

Qualifications: Master's or Bachelor's degree in Engineering, Computer Science, or equivalent (BE/BTech or MS in Computer Science).

Key Skills: Python, Django, Nginx, Linux, Sanic, Pandas, NumPy, Snowflake, SciPy, Data Visualization, Redshift, Big Data, Charting

Compensation - As per industry standards.

Joining - Immediate joining is preferable.

 

Required Skills:

 

  • Strong experience in Python and web frameworks like Django, Tornado, and/or Flask
  • Experience in data analytics using standard Python libraries such as Pandas, NumPy, and Matplotlib
  • Conversant in implementing charts using charting libraries like Highcharts, d3.js, c3.js, and dc.js, and data visualization tools like Plotly and ggplot
  • Experience handling large databases and data warehouse technologies like MongoDB, MySQL, Snowflake, and Redshift
  • Experience in building APIs and multi-threaded tasks on the Linux platform
  • Exposure to finance and capital markets will be an added advantage
  • Strong understanding of software design principles, algorithms, data structures, design patterns, and multithreading concepts
  • Experience building highly available distributed systems on cloud infrastructure, or exposure to the architectural patterns of large, high-scale web applications
  • Basic understanding of front-end technologies such as JavaScript, HTML5, and CSS3

 

Company Description:

Reval Analytical Services is a wholly-owned subsidiary of Virtua Research Inc., US. It is a financial services technology company focused on consensus analytics, peer analytics, and web-enabled information delivery. The company's unique combination of investment research experience, modeling expertise, and software development capabilities enables it to provide industry-leading financial research tools and services for investors, analysts, and corporate management.

 

Website: www.virtuaresearch.com

NeenOpal Intelligent Solutions Private Limited
Posted by Pavel Gupta
Remote, Bengaluru (Bangalore)
2 - 5 yrs
₹6L - ₹12L / yr
ETL
Python
Amazon Web Services (AWS)
SQL
PostgreSQL

We are actively seeking a Senior Data Engineer experienced in building data pipelines and integrations from 3rd party data sources by writing custom automated ETL jobs using Python. The role will work in partnership with other members of the Business Analytics team to support the development and implementation of new and existing data warehouse solutions for our clients. This includes designing database import/export processes used to generate client data warehouse deliverables.

 

Requirements
  • 2+ years' experience as an ETL developer, with strong data architecture knowledge of data warehousing concepts, SQL development and optimization, and operational support models.
  • Experience using Python to automate ETL/data processing jobs.
  • Design and develop ETL and data processing solutions using data integration tools, Python scripts, and AWS / Azure / on-premise environments.
  • Experience with (or willingness to learn) AWS Glue / AWS Data Pipeline / Azure Data Factory for data integration.
  • Develop and create transformation queries, views, and stored procedures for ETL processes and process automation.
  • Document data mappings, data dictionaries, processes, programs, and solutions per established data governance standards.
  • Work with the data analytics team to assess and troubleshoot potential data quality issues at key intake points, such as validating control totals at intake and then upon transformation, and transparently build lessons learned into future data quality assessments.
  • Solid experience with data modeling, business logic, and RESTful APIs.
  • Solid experience in the Linux environment.
  • Experience with NoSQL / PostgreSQL preferred.
  • Experience working with databases such as MySQL, NoSQL, and Postgres, and enterprise-level connectivity experience (such as connecting over TLS and through proxies).
  • Experience with NGINX and SSL.
  • Performance-tune data processes and SQL queries, and recommend and implement data process optimization and query tuning techniques.
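The custom automated ETL jobs described above follow an extract-transform-load shape that can be sketched end to end. SQLite stands in for both the source system and the warehouse here, and the tables and conversion rule are hypothetical; the point is only the structure of the job:

```python
import sqlite3

def run_etl(source, target):
    """Extract order rows, convert cents to a decimal amount, load to the warehouse."""
    rows = source.execute("SELECT id, amount_cents FROM orders").fetchall()   # extract
    transformed = [(i, cents / 100.0) for i, cents in rows]                   # transform
    target.execute("CREATE TABLE IF NOT EXISTS dw_orders (id INTEGER, amount REAL)")
    target.executemany("INSERT INTO dw_orders VALUES (?, ?)", transformed)    # load
    return len(transformed)

# Stand-in source system with two orders.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE orders (id INTEGER, amount_cents INTEGER)")
src.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 1050), (2, 300)])

dw = sqlite3.connect(":memory:")
loaded = run_etl(src, dw)  # 2 rows loaded
```

A production version would add the validation, logging, and documentation steps listed in the requirements, and swap SQLite for the client's actual source and warehouse connections.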
IQVIA
Posted by Sony Shetty
Remote, Kochi (Cochin)
1 - 5 yrs
₹4L - ₹10L / yr
Python
Scala
Spark
Big Data
Data Science
+1 more
Job Description Summary
Skill sets in job profile:
1) Machine learning development using Python or Scala/Spark
2) Knowledge of multiple ML algorithms such as random forest, XGBoost, RNNs, CNNs, transfer learning, etc.
3) Awareness of typical challenges in machine learning implementation and their respective applications

Good to have:
1) Stack development or DevOps team experience
2) Cloud services (AWS, Cloudera), SaaS, PaaS
3) Big data tools and frameworks
4) SQL experience

YCH Logistics
Posted by Sanatan Upmanyu
NCR (Delhi | Gurgaon | Noida)
0 - 5 yrs
₹2L - ₹5L / yr
Python
Deep Learning
MySQL
Job Description: Data Science Analyst / Data Science Senior Analyst

KSTYCH is seeking a Data Science Analyst to join our Data Science team. Individuals in this role are expected to be comfortable working as both a software engineer and a quantitative researcher, and should have a significant theoretical foundation in mathematical statistics. The ideal candidate will have a keen interest in the study of the pharma sector, network biology, text mining, and machine learning, and a passion for identifying and answering questions that help us build the best consulting resource and provide continuous support to other teams.

Responsibilities
  • Work closely with product, scientific, medical, business development, and commercial teams to identify and answer important healthcare/pharma/biology questions.
  • Answer questions by applying appropriate statistical techniques and tools to available data.
  • Communicate findings to project managers and team managers.
  • Drive the collection of new data and the refinement of existing data sources.
  • Analyze and interpret the results of experiments.
  • Develop best practices for instrumentation and experimentation and communicate them to other teams.

Requirements
  • B.Tech, M.Tech, M.S., or Ph.D. in a relevant technical field, or 1+ years' experience in a relevant role.
  • Extensive experience solving analytical problems using quantitative approaches.
  • Comfort manipulating and analyzing complex, high-volume, high-dimensionality data from varying sources.
  • A strong passion for empirical research and for answering hard questions with data.
  • A flexible analytic approach that allows for results at varying levels of precision.
  • Ability to communicate complex quantitative analysis in a clear, precise, and actionable manner.
  • Fluency with at least one scripting language such as Python or PHP.
  • Familiarity with relational databases and SQL.
  • Experience working with large data sets; experience with distributed computing tools (KNIME, MapReduce, Hadoop, Hive, etc.) is a plus.
Atyeti Inc
Posted by Yash G
Pune
5 - 8 yrs
₹8L - ₹16L / yr
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Python
R Programming
+3 more
  • Exposure to Deep Learning, Neural Networks, or related fields, and a strong interest and desire to pursue them.
  • Experience in Natural Language Processing, Computer Vision, Machine Learning, or Machine Intelligence (Artificial Intelligence).
  • Programming experience in Python.
  • Knowledge of machine learning frameworks like TensorFlow.
  • Experience with software version control systems like GitHub.
  • Understands Big Data concepts such as Hadoop, MongoDB, and Apache Spark.