Data Engineer (Azure)

at Scry Analytics

Posted by Siddarth Thakur
Remote only
3 - 8 yrs
₹15L - ₹20L / yr
Full time
Skills
Spark
Hadoop
Big Data
Data engineering
PySpark
Microsoft Azure
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
SQL
NoSQL Databases
Apache Kafka

Title: Data Engineer (Azure) (Location: Gurgaon/Hyderabad)

Salary: Competitive as per Industry Standard

We are expanding our Data Engineering Team and hiring passionate professionals with extensive knowledge and experience in building and managing large enterprise data and analytics platforms. We are looking for creative individuals with strong programming skills who can understand complex business and architectural problems and develop solutions. The individual will work closely with the rest of our data engineering and data science team in implementing and managing scalable smart data lakes, data ingestion platforms, machine learning and NLP-based analytics platforms, hyper-scale processing clusters, and data mining and search engines.

What You’ll Need:

  • 3+ years of industry experience in creating and managing end-to-end data solutions and optimal data processing pipelines and architecture, dealing with large-volume big data sets of varied data types.
  • Proficiency in Python, Linux and shell scripting.
  • Strong knowledge of working with PySpark and Pandas dataframes for writing efficient pre-processing and other data manipulation tasks (a short illustrative sketch follows this list).
  • Strong experience in developing the infrastructure required for data ingestion and for the optimal extraction, transformation, and loading of data from a wide variety of data sources, using tools like Azure Data Factory and Azure Databricks (or Jupyter notebooks/Google Colab, or other similar tools).
  • Working knowledge of GitHub or other version control tools.
  • Experience with creating RESTful web services and API platforms.
  • Ability to work with data science and infrastructure team members to implement practical machine learning solutions and pipelines in production.
  • Experience with cloud providers like Azure/AWS/GCP.
  • Experience with SQL and NoSQL databases: MySQL, Azure Cosmos DB, HBase, MongoDB, Elasticsearch, etc.
  • Experience with stream-processing systems (Spark Streaming, Kafka, etc.) and working experience with event-driven architectures.
  • Strong analytical skills related to working with unstructured datasets.
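To illustrate the PySpark bullet above, here is a minimal pre-processing sketch. The storage paths, container names, and columns (user_id, event_ts, amount) are hypothetical placeholders, not details from this role.

```python
# Illustrative sketch only: paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("preprocess-sketch").getOrCreate()

# Read raw JSON events from an Azure Blob container (hypothetical path).
raw = spark.read.json("wasbs://raw@examplestore.blob.core.windows.net/events/")

clean = (
    raw.dropDuplicates(["user_id", "event_ts"])          # remove duplicate events
       .filter(F.col("amount").isNotNull())              # drop incomplete records
       .withColumn("event_date", F.to_date("event_ts"))  # derive a partition key
)

# Write curated, partitioned Parquet for downstream consumers.
clean.write.mode("overwrite").partitionBy("event_date").parquet(
    "wasbs://curated@examplestore.blob.core.windows.net/events/"
)
```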

 

Good to have (to filter or prioritize candidates)

  • Experience with testing libraries such as pytest for writing unit tests for the developed code.
  • Knowledge of machine learning algorithms and libraries would be good to have; implementation experience would be an added advantage.
  • Knowledge and experience of data lakes, Docker, and Kubernetes would be good to have.
  • Knowledge of Azure Functions, Elasticsearch, etc. would be good to have.
  • Experience with model versioning (MLflow) and data versioning would be beneficial.
  • Experience with microservices libraries or with Python libraries such as Flask for hosting ML services and models would be great (a minimal sketch follows this list).
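As a rough illustration of the last bullet, a minimal Flask service that loads a pickled model and exposes a predict endpoint might look like the sketch below; the model file name and payload shape are hypothetical.

```python
# Illustrative sketch only: model file name and payload shape are hypothetical.
import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)

with open("model.pkl", "rb") as f:  # a previously trained, pickled model
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json()  # expects {"features": [...]}
    prediction = model.predict([payload["features"]])[0]  # scikit-learn style
    return jsonify({"prediction": float(prediction)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```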

About Scry Analytics

Scry Analytics is into AI-based software development for data analysis, fraud and anomaly detection and IoT data analytics.
Founded 2015  •  Product  •  100-500 employees  •  Profitable

Similar jobs

Senior Customer Scientist

at Crayon Data

Founded 2012  •  Product  •  100-500 employees  •  Raised funding
SQL
Python
Analytical Skills
Data modeling
Data Visualization
Statistical Modeling
Chennai
5 - 8 yrs
₹15L - ₹25L / yr

Role : Senior Customer Scientist 

Experience : 6-8 Years 

Location : Chennai (Hybrid) 
 
 

Who are we? 
 
 

A young, fast-growing AI and big data company, with an ambitious vision to simplify the world’s choices. Our clients are top-tier enterprises in the banking, e-commerce and travel spaces. They use our core AI-based choice engine maya.ai, to deliver personal digital experiences centered around taste. The maya.ai platform now touches over 125M customers globally. You’ll find Crayon Boxes in Chennai and Singapore. But you’ll find Crayons in every corner of the world. Especially where our client projects are – UAE, India, SE Asia and pretty soon the US. 
 
 

Life in the Crayon Box is a little chaotic, largely dynamic and keeps us on our toes! Crayons are a diverse and passionate bunch. Challenges excite us. Our mission drives us. And good food, caffeine (for the most part) and youthful energy fuel us. Over the last year alone, Crayon has seen a growth rate of 3x, and we believe this is just the start. 
 

 
We’re looking for young and young-at-heart professionals with a relentless drive to help Crayon double its growth. Leaders, doers, innovators, dreamers, implementers and eccentric visionaries, we have a place for you all. 
 

 
 

Can you say “Yes, I have!” to the below? 
 
 

  1. Experience with exploratory analysis, statistical analysis, and model development
  2. Knowledge of advanced analytics techniques, including predictive modelling (logistic regression; a brief sketch follows this list), segmentation, forecasting, data mining, and optimization
  3. Knowledge of software packages such as SAS, R, and RapidMiner for analytical modelling and data management
  4. Strong experience in SQL/Python/R, working efficiently at scale with large data sets
  5. Experience in using business intelligence tools such as Power BI, Tableau, and Metabase for business applications
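As a small, self-contained illustration of the predictive-modelling point above (item 2), here is a logistic-regression sketch with scikit-learn on synthetic data; it stands in for real client data, which is of course not shown here.

```python
# Illustrative sketch only: synthetic data in place of any real client dataset.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5_000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]
print(f"Holdout AUC: {roc_auc_score(y_test, scores):.3f}")
```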
     

 
 

 

Can you say “Yes, I will!” to the below? 
 
 

  1. Drive clarity and solve ambiguous, challenging business problems using data-driven approaches. Propose and own data analysis (including modelling, coding, analytics) to drive business insight and facilitate decisions.
  2. Develop creative solutions and build prototypes to business problems using algorithms based on machine learning, statistics, and optimisation, and work with engineering to deploy those algorithms and create impact in production.
  3. Perform time-series analyses, hypothesis testing, and causal analyses to statistically assess relative impact and extract trends.
  4. Coordinate individual teams to fulfil client requirements and manage deliverables.
  5. Communicate and present complex concepts to business audiences.
  6. Travel to client locations when necessary.

 

 

Crayon is an equal opportunity employer. Employment is based on a person's merit, qualifications, and professional competence. Crayon does not discriminate against any employee or applicant because of race, creed, color, religion, gender, sexual orientation, gender identity/expression, national origin, disability, age, genetic information, marital status, pregnancy, or related conditions.
 
 

More about Crayon: https://www.crayondata.com/  
 

More about maya.ai: https://maya.ai/  

 

 

Job posted by
Varnisha Sethupathi

Big Data Engineer

at Propellor.ai

Founded 2016  •  Products & Services  •  20-100 employees  •  Raised funding
Python
SQL
Spark
Hadoop
Big Data
Data engineering
PySpark
Remote only
1 - 4 yrs
₹5L - ₹15L / yr

Big Data Engineer/Data Engineer


What we are solving
Welcome to today’s business data world where:
• Unification of all customer data into one platform is a challenge
• Extraction is expensive
• Business users do not have the time/skill to write queries
• High dependency on the tech team for written queries

These facts may look scary, but there are solutions with real-time self-serve analytics:
• Fully automated data integration from any kind of data source into a universal schema
• An analytics database that streamlines data indexing, querying, and analysis into a single platform
• Start generating value from Day 1 through deep dives, root-cause analysis, and micro-segmentation

At Propellor.ai, this is what we do.
• We help our clients reduce effort and increase effectiveness quickly
• By clearly defining the scope of projects
• By using dependable, scalable, future-proof technology solutions like Big Data solutions and Cloud platforms
• By engaging with Data Scientists and Data Engineers to provide end-to-end solutions, leading to the industrialisation of Data Science model development and deployment

What we have achieved so far
Since we started in 2016,
• We have worked across 9 countries with 25+ global brands and 75+ projects
• We have 50+ clients, 100+ Data Sources and 20TB+ data processed daily

Work culture at Propellor.ai
We are a small, remote team that believes in
• Working with a few, but only the highest-quality, team members who want to become the very best in their fields.
• With each member's belief and faith in what we are solving, we collectively see the Big Picture
• No hierarchy leads us to believe in reaching the decision maker without any hesitation so that our actions can have fruitful and aligned outcomes.
• Each one is a CEO of their domain. So, the criterion while making any choice is that our employees and clients can succeed together!

To read more about us, click here:
https://bit.ly/3idXzs0

About the role
We are building an exceptional team of Data Engineers who are passionate developers and want to push the boundaries to solve complex business problems using the latest tech stack. As a Big Data Engineer, you will work with various Technology and Business teams to deliver our Data Engineering offerings to our clients across the globe.

Role Description

• The role would involve big data pre-processing & reporting workflows including collecting, parsing, managing, analysing, and visualizing large sets of data to turn information into business insights
• Develop the software and systems needed for end-to-end execution on large projects
• Work across all phases of SDLC, and use Software Engineering principles to build scalable solutions
• Build the knowledge base required to deliver increasingly complex technology projects
• The role would also involve testing various machine learning models on Big Data and deploying learned models for ongoing scoring and prediction.

Education & Experience
• B.Tech. or equivalent degree in CS/CE/IT/ECE/EEE; 3+ years of experience designing technological solutions to complex data problems, and developing & testing modular, reusable, efficient and scalable code to implement those solutions.

Must have (hands-on) experience
• Python and SQL expertise
• Distributed computing frameworks (Hadoop ecosystem & Spark components); a short sketch follows below
• Proficiency in any cloud computing platform (AWS/Azure/GCP); experience with GCP (BigQuery/Bigtable, Pub/Sub, Dataflow, App Engine), AWS, or Azure would be preferred
• Linux environment, SQL and shell scripting

Desirable
• A statistical or machine-learning DSL such as R
• Distributed and low-latency (streaming) application architecture
• Row-store distributed DBMSs such as Cassandra, CouchDB, MongoDB, etc.
• Familiarity with API design
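A minimal sketch of the Python + SQL + Spark combination listed above; the bucket path, table, and columns are hypothetical.

```python
# Illustrative sketch only: table and column names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-on-spark-sketch").getOrCreate()

orders = spark.read.parquet("s3a://example-bucket/orders/")  # hypothetical path
orders.createOrReplaceTempView("orders")

# Daily revenue per source system, expressed in plain SQL over the DataFrame.
daily = spark.sql("""
    SELECT order_date,
           source_system,
           SUM(amount) AS revenue,
           COUNT(*)    AS orders
    FROM orders
    GROUP BY order_date, source_system
""")
daily.show(10)
```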

Hiring Process:
1. One phone screening round to gauge your interest and knowledge of fundamentals
2. An assignment to test your skills and ability to come up with solutions in a certain time
3. Interview 1 with our Data Engineer lead
4. Final Interview with our Data Engineer Lead and the Business Teams

Immediate joiners preferred.

Job posted by
Kajal Jain

Senior Data Analyst

at Rebel Foods

Founded 2011  •  Product  •  500-1000 employees  •  Raised funding
Data Analytics
NOSQL Databases
Python
NumPy
pandas
Data Visualization
SQL
Mumbai
6 - 10 yrs
₹16L - ₹28L / yr

About Rebel Foods:

The world's leading consumer companies are all technology / new-age companies: Amazon (retail), Airbnb (hospitality), Uber (mobility), Netflix / Spotify (entertainment). The only sector where traditional companies are still the largest ones is restaurants; McDonald's has a market cap of 130 BN USD. With food delivery growing exponentially worldwide, there is an opportunity to build the world's most valuable restaurant company on the internet, superfast. We have the formula to be that company. Today, we can safely say we are the world's largest delivery-only / internet restaurant company, and by a wide margin, with 4000+ individual internet restaurants in 40+ cities and 7 countries (India, Indonesia, UAE, UK, Malaysia, Singapore, Bangladesh). It's still Day 1, but we know we are onto something very, very big.

 

We have a once-in-a-lifetime opportunity to change the 500-year-old industry that hasn’t been disrupted at its core by technology. For more details on how we are changing the restaurant industry from the core, please refer below. It's important reading if you want to know our company better and really explore working with us:

 

https://medium.com/@jaydeep_barman/why-is-rebel-foods-hiring-super-talented-engineers-b88586223ebe

https://medium.com/@jaydeep_barman/how-to-build-1000-restaurants-in-24-months-the-rebel-method-cb5b0cea4dc8

https://medium.com/faasos-story/winning-the-last-frontier-for-consumer-internet-5f2a659c43db

https://medium.com/faasos-story/a-unique-take-on-food-tech-dcef8c51ba41


The Role

  • Understanding Business and Data Requirements from stakeholders
  • Creating Business Reports
  • Report Automation
  • Creating Dashboards/Visualizations for Business KPIs
  • Data Mining for Business Insights
  • Initiatives based on business needs and requirements
  • Evaluating business processes, uncovering areas of improvement, optimizing strategies and implementing solutions
  • Problem-solving skills

 

Requirements:

  • Acquire, aggregate, restructure, and analyse large datasets.
  • Ability to work with various SQL and NoSQL data sources, S3, APIs, etc.
  • Data manipulation experience with Python/Pandas (a short sketch follows this list)
  • Exhibit strong analytical, technical, troubleshooting, and problem-solving skills
  • Ability to work in a team-oriented, fast-paced environment managing multiple priorities
  • Project management and organizational skills
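As a small illustration of the Python/Pandas bullet, the sketch below computes a weekly per-city KPI; the file and column names are hypothetical.

```python
# Illustrative sketch only: file and column names are hypothetical.
import pandas as pd

orders = pd.read_csv("orders.csv", parse_dates=["order_date"])

# Weekly KPIs per city: order volume and average order value.
weekly = (
    orders.set_index("order_date")
          .groupby("city")["order_value"]
          .resample("W")
          .agg(["count", "mean"])   # orders per week, average order value
          .reset_index()
)
print(weekly.head())
```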

 

 

Unique Opportunity

  • Get a chance to work on interesting Data Science/Machine Learning problems in the areas of Computer Vision, Natural Language Processing, and Time-Series Forecasting
  • Get a chance to work on Deep Analytics systems built on large amounts of multi-source data using advanced Data Engineering

 

Languages - SQL, Python (Pandas)

BI Tools - Tableau/Power BI/Quicksight

 

The Rebel Culture

We believe in empowering and growing people to perform the best at their job functions. We follow an outcome-oriented, fail-fast, iterative & collaborative culture to move fast in building tech solutions.

Rebel is not a usual workplace. The following slides will give you a sense of our culture, how Rebel conducts itself and who will be the best fit for our company. We suggest you go through it before making up your mind.

https://www.slideshare.net/JaydeepBarman/culture-rebel

 

Job posted by
Ankit Suman

Senior Data Engineer

at Balbix India Pvt Ltd

Founded 2015  •  Product  •  100-500 employees  •  Profitable
NOSQL Databases
MongoDB
Elastic Search
API
SQL
Python
Scala
Time series
InfluxDB
Gurgaon, Bengaluru (Bangalore)
3 - 10 yrs
₹15L - ₹25L / yr
WHO WE ARE
Balbix is the world's leading platform for cybersecurity posture automation. Using Balbix, organizations can discover, prioritize and mitigate unseen risks and vulnerabilities at high velocity. With seamless data collection and petabyte-scale analysis capabilities, Balbix is deployed and operational within hours, and helps to decrease breach risk immediately. 
 
Balbix counts many Global 1000 companies among its rapidly growing customer base. We are backed by John Chambers (the former CEO and Chairman of Cisco), top Silicon Valley VCs, and global investors. We have been called magical, and have received raving reviews as well as customer testimonials, numerous industry awards, and recognition by Gartner as a Cool Vendor and by Frost & Sullivan.

ABOUT THIS ROLE
As a senior data engineer you will work on problems related to storing, analyzing, and manipulating very large cybersecurity and IT data sets. You will collaborate closely with our data scientists, threat researchers and network experts to solve real-world problems plaguing cybersecurity. This role requires excellent architecture, design, testing and programming skills as well as experience in large-scale data engineering.

You will:
  • Architect and implement modules for ingesting, storing and manipulating large data sets for a variety of cybersecurity use-cases (a small ingestion sketch follows this list). 
  • Write code to provide backend support for data-driven UI widgets, web dashboards, workflows, search and API connectors.  
  • Design and implement high performance APIs between our frontend and backend components, and between different backend components. 
  • Build production quality solutions that balance complexity and performance
  • Participate in the engineering life-cycle at Balbix, including designing high quality UI components, writing production code, conducting code reviews and working alongside our backend infrastructure and reliability teams
  • Stay current on the ever-evolving technology landscape of web based UIs and recommend new systems for incorporation in our technology stack.
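By way of illustration only (this is not Balbix's actual code), an idempotent bulk upsert into MongoDB of the kind the ingestion bullet describes might be sketched as follows; the connection string, database, and field names are hypothetical.

```python
# Illustrative sketch only: endpoint, database, and fields are hypothetical.
from pymongo import MongoClient, UpdateOne

client = MongoClient("mongodb://localhost:27017")  # hypothetical endpoint
assets = client["inventory"]["assets"]

records = [
    {"asset_id": "host-001", "os": "linux", "open_ports": [22, 443]},
    {"asset_id": "host-002", "os": "windows", "open_ports": [3389]},
]

# Idempotent upserts keyed on asset_id, so re-ingestion is safe.
ops = [
    UpdateOne({"asset_id": r["asset_id"]}, {"$set": r}, upsert=True)
    for r in records
]
result = assets.bulk_write(ops)
print(f"upserted={len(result.upserted_ids)}, modified={result.modified_count}")
```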
You are:
  • Product-focused and passionate about building truly usable systems
  • Collaborative and comfortable working across teams, including data engineering, front end, product management, and DevOps
  • Responsible and like to take ownership of challenging problems
  • A good communicator who facilitates teamwork via good documentation practices
  • Comfortable with ambiguity and able to iterate quickly in response to an evolving understanding of customer needs
  • Curious about the world and your profession, and a constant learner 
You have:
  • BS in Computer Science or related field
  • At least 3 years of experience in the backend web stack (Node.js, MongoDB, Redis, Elasticsearch, Postgres, Java, Python, Docker, Kubernetes, etc.)
  • SQL and NoSQL database experience
  • Experience building APIs (development experience using GraphQL is a plus)
  • Familiarity with issues of web performance, availability, scalability, reliability, and maintainability 
Job posted by
Garima Saraswat

Principal Data Engineer

at AI-powered cloud-based SaaS solution provider

Agency job
via wrackle
Data engineering
Big Data
Spark
Apache Kafka
Cassandra
Apache ZooKeeper
Data engineer
Hadoop
HDFS
MapReduce
AWS CloudFormation
EMR
Amazon EMR
Amazon S3
Apache Spark
Java
PythonAnywhere
Test driven development (TDD)
Cloud Computing
Google Cloud Platform (GCP)
Agile/Scrum
OOD
Software design
Architecture
YARN
Bengaluru (Bangalore)
8 - 15 yrs
₹25L - ₹60L / yr
Responsibilities

● Able to contribute to the gathering of functional requirements, developing technical specifications, and test-case planning
● Demonstrating technical expertise, and solving challenging programming and design problems
● 60% hands-on coding with architecture ownership of one or more products
● Ability to articulate architectural and design options, and educate development teams and business users
● Resolve defects/bugs during QA testing, pre-production, production, and post-release patches
● Mentor and guide team members
● Work cross-functionally with various Bidgely teams, including product management, QA/QE, various product lines, and/or business units, to drive forward results

Requirements
● BS/MS in computer science or equivalent work experience
● 8-12 years' experience designing and developing applications in data engineering
● Hands-on experience with Big Data ecosystems
● Past experience with Hadoop, HDFS, MapReduce, YARN, AWS Cloud, EMR, S3, Spark, Cassandra, Kafka, ZooKeeper
● Expertise with any of the following object-oriented languages: Java/J2EE, Scala, Python
● Ability to lead and mentor technical team members
● Expertise with the entire Software Development Life Cycle (SDLC)
● Excellent communication skills: demonstrated ability to explain complex technical issues to both technical and non-technical audiences
● Expertise in the software design/architecture process
● Expertise with unit testing & Test-Driven Development (TDD); a small pytest sketch follows this list
● Business acumen: strategic thinking & strategy development
● Experience on Cloud or AWS is preferable
● A good understanding of, and the ability to develop, software, prototypes, or proofs of concept (POCs) for various data engineering requirements
● Experience with Agile development, Scrum, or Extreme Programming methodologies
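As a tiny illustration of the unit-testing/TDD bullet above, here is a pytest sketch for a small, hypothetical transformation helper; the function and its rules are invented for illustration.

```python
# Illustrative sketch only: the helper and its rules are hypothetical.
import pytest

def normalize_reading(raw_kwh: float) -> float:
    """Clamp negative meter readings to zero and round to 3 decimals."""
    return round(max(raw_kwh, 0.0), 3)

def test_negative_readings_are_clamped():
    assert normalize_reading(-4.2) == 0.0

def test_rounding_to_three_decimals():
    assert normalize_reading(1.23456) == pytest.approx(1.235)
```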
Job posted by
Naveen Taalanki

Data Engineer

at Top Management Consulting Company

Python
SQL
Amazon Web Services (AWS)
Microsoft Azure
Google Cloud Platform (GCP)
Gurugram, Bengaluru (Bangalore)
2 - 9 yrs
Best in industry
Greetings!!

We are looking for a technically driven Data Engineer for one of our premium clients.

COMPANY DESCRIPTION:
This Company is a global management consulting firm. We are the trusted advisor to the world's leading businesses, governments, and institutions. We work with leading organizations across the private, public and social sectors. 

Qualifications
• Bachelor's degree in computer science or a related field; a Master's degree is a plus
• 3+ years of relevant work experience
• Meaningful experience with at least two of the following technologies: Python, Scala, Java
• Strong proven experience with distributed processing frameworks (Spark, Hadoop, EMR) and SQL is very much expected
• Commercial client-facing project experience is helpful, including working in close-knit teams
• Ability to work across structured, semi-structured, and unstructured data, extracting information and identifying linkages across disparate data sets
• Confirmed ability in clearly communicating complex solutions
• Understanding of information security principles to ensure compliant handling and management of client data
• Experience and interest in cloud platforms such as AWS, Azure, Google Cloud Platform, or Databricks
• Extraordinary attention to detail
Job posted by
Naveed Mohd

Data Analyst

at Falcon Autotech Pvt Ltd

Founded 2004  •  Products & Services  •  100-1000 employees  •  Profitable
Data Analytics
Data Analyst
Tableau
MySQL
SQL
Noida
3 - 7 yrs
₹4L - ₹7L / yr
  • Strong analytical skills with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy
  • Expertise in SQL/PL-SQL: the ability to write procedures and create queries for reporting purposes (a small example follows this list)
  • Must have worked on a reporting tool (Power BI, Tableau, etc.)
  • Strong knowledge of Excel/Google Sheets: must have worked with pivot tables, aggregate functions, and logical IF conditions
  • Strong verbal and written communication skills for coordination with departments
  • An analytical mind and an inclination for problem-solving
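As a small illustration of the "aggregate functions + logical IF conditions" reporting pattern above, here is a self-contained sketch run against an in-memory SQLite database; the table and columns are hypothetical.

```python
# Illustrative sketch only: table and columns are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE shipments (region TEXT, status TEXT, weight_kg REAL);
    INSERT INTO shipments VALUES
        ('North', 'delivered', 12.5),
        ('North', 'delayed',    8.0),
        ('South', 'delivered',  5.5);
""")

-- comment placeholder (see Python comment below)
report = conn.execute("""
    SELECT region,
           SUM(weight_kg) AS total_weight,
           100.0 * SUM(CASE WHEN status = 'delivered' THEN 1 ELSE 0 END)
                 / COUNT(*) AS delivered_pct  -- conditional aggregate
    FROM shipments
    GROUP BY region
""").fetchall()
print(report)  # per-region totals and delivered percentage
```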
Job posted by
Rohit Kaushik

Associate Manager - Database Development (PostgreSQL)

at Sportz Interactive

Founded 2002  •  Products & Services  •  100-1000 employees  •  Profitable
PostgreSQL
PL/SQL
Big Data
Optimization
Stored Procedures
Remote, Mumbai, Navi Mumbai, Pune, Nashik
7 - 12 yrs
₹15L - ₹16L / yr

Job Role : Associate Manager (Database Development)


Key Responsibilities:

  • Optimizing the performance of stored procedures and SQL queries to deliver large amounts of data in under a few seconds
  • Designing and developing numerous complex queries, views, functions, and stored procedures to work seamlessly with the Application/Development team's data needs
  • Providing solutions to all data-related needs to support existing and new applications
  • Creating scalable structures to cater to large user bases and manage high workloads
  • Involved in every step of a project, from requirement gathering through implementation and maintenance
  • Developing custom stored procedures and packages to support new enhancement needs (a small example of invoking one from Python follows this list)
  • Working with multiple teams to design, develop and deliver early warning systems
  • Reviewing query performance and optimizing code
  • Writing queries used for front-end applications
  • Designing and coding database tables to store the application data
  • Data modelling to visualize database structure
  • Working with application developers to create optimized queries
  • Maintaining database performance by troubleshooting problems
  • Accomplishing platform upgrades and improvements by supervising system programming
  • Securing the database by developing policies, procedures, and controls
  • Designing and managing deep statistical systems
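As a rough illustration of the stored-procedure work above, the sketch below calls a PostgreSQL function from Python with psycopg2; the DSN and routine name are hypothetical.

```python
# Illustrative sketch only: DSN and routine name are hypothetical.
import psycopg2

conn = psycopg2.connect("dbname=sports user=app password=secret host=localhost")
try:
    with conn.cursor() as cur:
        # callproc runs a server-side function and exposes its result rows.
        cur.callproc("get_match_summary", [1042])
        for row in cur.fetchall():
            print(row)
    conn.commit()
finally:
    conn.close()
```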

Desired Skills and Experience  :

  • 7+ years of experience in database development
  • A minimum of 4 years of experience with PostgreSQL is a must
  • Experience and in-depth knowledge of PL/SQL
  • Ability to come up with multiple possible ways of solving a problem, and to decide on the most optimal approach for implementation that best suits the work case
  • Knowledge of database administration, with the ability and experience to use CLI tools for administration
  • Experience in Big Data technologies is an added advantage
  • Secondary platforms: MS SQL 2005/2008, Oracle, MySQL
  • Ability to take ownership of tasks and flexibility to work individually or in a team
  • Ability to communicate with teams and clients across time zones and global regions
  • Good communication skills and self-motivation
  • Ability to work under pressure
  • Knowledge of NoSQL and Cloud Architecture will be an advantage
Job posted by
Nishita Dsouza

Data Engineer

at VIMANA

Founded 2009  •  Product  •  20-100 employees  •  Profitable
Data engineering
Data Engineer
Apache Kafka
Big Data
Java
NodeJS (Node.js)
Elastic Search
Test driven development (TDD)
Python
Remote, Chennai
2 - 5 yrs
₹10L - ₹20L / yr

We are looking for passionate, talented and super-smart engineers to join our product development team. If you are someone who innovates, loves solving hard problems, and enjoys end-to-end product development, then this job is for you! You will be working with some of the best developers in the industry in a self-organising, agile environment where talent is valued over job title or years of experience.

 

Responsibilities:

  • You will be involved in end-to-end development of VIMANA technology, adhering to our development practices and expected quality standards.
  • You will be part of a highly collaborative Agile team which passionately follows SAFe Agile practices, including pair-programming, PR reviews, TDD, and Continuous Integration/Delivery (CI/CD).
  • You will be working with cutting-edge technologies and tools for stream processing using Java, NodeJS and Python, with frameworks like Spring, RxJS, etc.
  • You will be leveraging big data technologies like Kafka, Elasticsearch and Spark, processing more than 10 Billion events per day to build a maintainable system at scale.
  • You will be building Domain Driven APIs as part of a micro-service architecture.
  • You will be part of a DevOps culture where you will get to work with production systems, including operations, deployment, and maintenance.
  • You will have an opportunity to continuously grow and build your capabilities, learning new technologies, languages, and platforms.

 

Requirements:

  • Undergraduate degree in Computer Science or a related field, or equivalent practical experience.
  • 2 to 5 years of product development experience.
  • Experience building applications using Java, NodeJS, or Python.
  • Deep knowledge in Object-Oriented Design Principles, Data Structures, Dependency Management, and Algorithms.
  • Working knowledge of message queuing, stream processing, and highly scalable Big Data technologies (a minimal consumer sketch follows this list).
  • Experience in working with Agile software methodologies (XP, Scrum, Kanban), TDD and Continuous Integration (CI/CD).
  • Experience using NoSQL databases like MongoDB or Elasticsearch.
  • Prior experience with container orchestrators like Kubernetes is a plus.
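As a minimal illustration of the stream-processing bullet, here is a consumer sketch using the kafka-python client; the topic name, brokers, and event fields are hypothetical.

```python
# Illustrative sketch only: topic, brokers, and event fields are hypothetical.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "machine-events",                       # hypothetical topic
    bootstrap_servers=["localhost:9092"],
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
    group_id="telemetry-aggregator",
)

for message in consumer:
    event = message.value
    # In a real pipeline this would update an aggregate or index into
    # Elasticsearch; here we just print the event.
    print(event.get("machine_id"), event.get("status"))
```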
About VIMANA

We build products and platforms for the Industrial Internet of Things. Our technology is being used around the world in mission-critical applications - from improving the performance of manufacturing plants, to making electric vehicles safer and more efficient, to making industrial equipment smarter.

Please visit https://govimana.com/ to learn more about what we do.

Why Explore a Career at VIMANA
  • We recognize that our dedicated team members make us successful and we offer competitive salaries.
  • We are a workplace that values work-life balance, provides flexible working hours, and full time remote work options.
  • You will be part of a team that is highly motivated to learn and work on cutting edge technologies, tools, and development practices.
  • Bon Appetit! Enjoy catered breakfasts, lunches and free snacks!

VIMANA Interview Process
We usually aim to complete all the interviews within a week and provide prompt feedback to the candidate. As of now, all interviews are conducted online due to the COVID situation.

1. Telephonic screening (30 min)

A 30-minute telephonic interview to understand and evaluate the candidate's fit with the job role and the company, clarify any queries regarding the job/company, and give an overview of further interview rounds.

2. Technical Rounds

This would be a deep technical round to evaluate the candidate's technical capability pertaining to the job role.

3. HR Round

The candidate's team and cultural fit will be evaluated during this round.

We would proceed with releasing the offer if the candidate clears all the above rounds.

Note: In certain cases, we might schedule additional rounds if needed before releasing the offer.
Job posted by
Loshy Chandran

ETL Developer

at PriceSenz

Founded 2015  •  Services  •  20-100 employees  •  Profitable
ETL
SQL
Informatica PowerCenter
Remote only
2 - 15 yrs
₹1L - ₹20L / yr

If you are an outstanding ETL Developer with a passion for technology and looking forward to being part of a great development organization, we would love to hear from you. We are offering technology consultancy services to our Fortune 500 customers with a primary focus on digital technologies. Our customers are looking for top-tier talents in the industry and willing to compensate based on your skill and expertise. The nature of our engagement is Contract in most cases. If you are looking for the next big step in your career, we are glad to partner with you. 

 

Below is the job description for your review:

• Extensive hands-on experience in designing and developing ETL packages using SSIS
• Extensive experience in performance tuning of SSIS packages
• In-depth knowledge of data warehousing concepts and ETL systems, and of relational databases like SQL Server 2012/2014

Job posted by
Karthik Padmanabhan