Azure Cloud Engineer

at Synergetic IT Services India Pvt Ltd

Mumbai
2 - 5 yrs
₹2L - ₹8L / yr
Full time
Skills
ETL
Informatica
Data Warehouse (DWH)
Microsoft Windows Azure
Big Data
Amazon Web Services (AWS)
1. Responsible for the evaluation of cloud strategy and program architecture.
2. Responsible for gathering system requirements, working together with application architects and owners.
3. Responsible for generating the scripts and templates required for automatic provisioning of resources.
4. Discover standard cloud service offerings, and install and execute processes and standards for optimal use of cloud service provider offerings.
5. Incident management on IaaS, PaaS, and SaaS.
6. Responsible for debugging technical issues inside a complex stack involving virtualization, containers, microservices, etc.
7. Collaborate with engineering teams to enable their applications to run on cloud infrastructure.
8. Experience with OpenStack, Linux, Amazon Web Services, Microsoft Azure, DevOps, NoSQL, etc. will be a plus.
9. Design, implement, configure, and maintain various Azure IaaS, PaaS, and SaaS services.
10. Deploy and maintain Azure IaaS virtual machines and Azure application and networking services.
11. Optimize Azure billing for cost/performance (VM right-sizing, reserved instances, etc.).
12. Implement, and fully document, IT projects.
13. Identify improvements to IT documentation, network architecture, processes/procedures, and tickets.
14. Research products and new technologies to increase the efficiency of business and operations.
15. Keep all tickets and projects updated and track time in a detailed format.
16. Should be able to multi-task and work across a range of projects and issues with varying timelines and priorities.
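Item 3 (scripts and templates for automatic provisioning) in practice usually means ARM/Bicep templates or CLI scripts. As a hedged illustration only — the account name, location, and SKU below are invented placeholders, not values from any real environment — a short Python script can assemble a minimal ARM template for a storage account:

```python
import json

def make_storage_template(account_name: str, location: str = "centralindia") -> dict:
    """Build a minimal ARM deployment template for one storage account.

    Names, location, and SKU are illustrative placeholders only.
    """
    return {
        "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
        "contentVersion": "1.0.0.0",
        "resources": [
            {
                "type": "Microsoft.Storage/storageAccounts",
                "apiVersion": "2022-09-01",
                "name": account_name,
                "location": location,
                "sku": {"name": "Standard_LRS"},
                "kind": "StorageV2",
            }
        ],
    }

template = make_storage_template("demostorage01")
print(json.dumps(template, indent=2))
```

A template like this would then be handed to a deployment command such as `az deployment group create --template-file template.json`; generating it from code keeps resource definitions repeatable and reviewable.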
Technical:
• Minimum 1 year of experience with Azure; knowledge of Office 365 services preferred.
• Formal education in IT preferred.
• Experience with the Managed Services business model a major plus.
• Bachelor's degree preferred.

Similar jobs

Azure Developer (Power BI Developer)

at a global business process management company

Agency job
via Jobdost
Business Intelligence (BI)
PowerBI
Windows Azure
Git
SVN
Hadoop
Amazon Web Services (AWS)
Salesforce
SAP
HANA
SQL server
Azure Synapse
Flat file
Data Visualization
Bengaluru (Bangalore)
3 - 8 yrs
₹14L - ₹20L / yr

Power BI Developer (Azure Developer)

Job Description:

A senior visualization engineer with an understanding of Azure Data Factory and Databricks, to develop and deliver solutions that enable the delivery of information to audiences in support of key business processes.

Ensure code and design quality through execution of test plans, and assist in the development of standards and guidelines, working closely with internal and external design, business, and technical counterparts.

 

Desired Competencies:

  • Strong grasp of data-visualization design concepts centered on the business user, and a knack for communicating insights visually.
  • Ability to produce any of the available charting methods, with drill-down options and action-based reporting; this includes choosing the right graphs for the underlying data, with company themes and objects.
  • Publishing reports and dashboards to a reporting server and providing role-based access to users.
  • Ability to create wireframes in any tool to communicate the reporting design.
  • Creation of ad-hoc reports and dashboards to visually communicate data-hub metrics (metadata information) for top-management understanding.
  • Able to handle huge volumes of data from databases such as SQL Server, Synapse, Delta Lake, or flat files, and to create high-performance dashboards.
  • Strong Power BI development skills.
  • Expertise in 2 or more BI (visualization) tools for building reports and dashboards.
  • Understanding of Azure components such as Azure Data Factory, Data Lake Store, SQL Database, and Azure Databricks.
  • Strong knowledge of SQL.
  • Must have worked through the full development life cycle, from functional design to deployment.
  • Intermediate understanding of how to format, process, and transform data.
  • Working knowledge of Git and SVN.
  • Good experience establishing connections with heterogeneous sources such as Hadoop, Hive, Amazon (AWS), Azure, Salesforce, SAP, HANA, APIs, and various databases.
  • Basic understanding of data modelling and the ability to combine data from multiple sources to create integrated reports.
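For the "strong knowledge of SQL" point above, dashboard work usually comes down to aggregation queries of the kind below. This is a self-contained sketch against an in-memory SQLite table; the table and column names are invented purely for illustration:

```python
import sqlite3

# Toy sales table standing in for a warehouse fact table (names are made up).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("South", 120.0), ("South", 80.0), ("North", 50.0)],
)

# A typical dashboard aggregate: revenue per region, largest first.
rows = conn.execute(
    """
    SELECT region, SUM(amount) AS revenue
    FROM sales
    GROUP BY region
    ORDER BY revenue DESC
    """
).fetchall()
print(rows)  # [('South', 200.0), ('North', 50.0)]
```

The same GROUP BY / ORDER BY shape carries over to SQL Server or Synapse; only the connection layer changes.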

 

Preferred Qualifications:

  • Bachelor's degree in Computer Science or Technology
  • Proven success in contributing to a team-oriented environment
Job posted by
Saida Jabbar

Data Engineering Manager

at Porter.in

Founded 2014  •  Services  •  100-1000 employees  •  Profitable
Python
SQL
Spark
Amazon Web Services (AWS)
Team Management
Bengaluru (Bangalore)
7 - 12 yrs
₹25L - ₹35L / yr

Manager | Data Engineering

Bangalore | Full Time

Company Overview:

At Porter, we are passionate about improving productivity. We want to help businesses, large and small, optimize their last-mile operations and empower them to unleash the growth of their core functions. Last-mile delivery logistics is one of the biggest and fastest-growing sectors of the economy, with a market cap upwards of 50 billion USD and a growth rate exceeding 15% CAGR.

Porter is the fastest-growing leader in this sector, with operations in 14 major cities, a fleet exceeding 1L registered and 50K active driver partners, and a customer base of 3.5M monthly active users. Our industry-best technology platform has raised over 50 million USD from investors including Sequoia Capital, Kae Capital, the Mahindra group, and LGT Aspada. We are addressing a massive problem and going after a huge market.

We're trying to create a household name in transportation, and our ambition is to disrupt all facets of last-mile logistics, including warehousing and LTL transportation. At Porter, we're here to do the best work of our lives.

If you want to do the same, and love the challenges and opportunities of a fast-paced work environment, then we believe Porter is the right place for you.

 

Responsibilities

Data Strategy and Alignment

  • Work closely with data analysts and business / product teams to understand requirements and provide data ready for analysis and reporting.
  • Apply, help define, and champion data governance: data quality, testing, documentation, coding best practices, and peer reviews.
  • Continuously discover, transform, test, deploy, and document data sources and data models.
  • Work closely with the Infrastructure team to build and improve our Data Infrastructure.
  • Develop and execute data roadmap (and sprints) - with a keen eye on industry trends and direction.

 

 

 

Data Stores and System Development

  • Design and implement high-performance, reusable, and scalable data models for our data warehouse to ensure our end-users get consistent and reliable answers when running their own analyses.
  • Focus on test driven design and results for repeatable and maintainable processes and tools.
  • Create and maintain optimal data pipeline architecture - and data flow logging framework.
  • Build the data products, features, tools, and frameworks that enable and empower Data, and Analytics teams across Porter.

Project Management

  • Drive project execution using effective prioritization and resource allocation.
  • Resolve blockers through technical expertise, negotiation, and delegation.
  • Strive for on-time complete solutions through stand-ups and course-correction.

Team Management

  • Manage and elevate a team of 5-8 members.
  • Hold regular one-on-ones with teammates to ensure their welfare.
  • Provide periodic assessment and actionable feedback on progress.
  • Recruit new members with a view to long-term resource planning, through effective collaboration with the hiring team.

Process design

  • Set the bar for the quality of technical and data-based solutions the team ships.
  • Enforce code quality standards and establish good code review practices - using this as a nurturing tool.
  • Set up communication channels and feedback loops for knowledge sharing and stakeholder management.
  • Explore the latest best practices and tools for constant up-skilling.

 

Data Engineering Stack

  • Analytics : Python / R / SQL + Excel / PPT, Google Colab
  • Database : PostgreSQL, Amazon Redshift, DynamoDB, Aerospike
  • Warehouse : Redshift, S3
  • ETL : Airflow + DBT + Custom-made Python + Amundsen (Discovery)
  • Business Intelligence / Visualization : Metabase + Google Data Studio
  • Frameworks : Spark + Dash + Streamlit
  • Collaboration : Git, Notion
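The ETL row above (Airflow + DBT + custom Python) implies dependency-ordered task execution. The core idea of such a DAG runner can be sketched in a few lines of plain Python — this is an illustration of the concept only, not Airflow's actual API, and the task names are invented:

```python
from graphlib import TopologicalSorter

# A toy DAG in Airflow's spirit: each task maps to the set of tasks it depends on.
dag = {
    "extract_orders": set(),
    "extract_drivers": set(),
    "transform_joined": {"extract_orders", "extract_drivers"},
    "load_warehouse": {"transform_joined"},
}

def run_pipeline(dag: dict) -> list:
    """Execute tasks in dependency order and return the order used."""
    executed = []
    for task in TopologicalSorter(dag).static_order():
        executed.append(task)  # a real runner would invoke the task here
    return executed

order = run_pipeline(dag)
print(order)
```

Airflow layers scheduling, retries, and logging on top of exactly this ordering guarantee: no task runs before everything it depends on has finished.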
Job posted by
Satyajit Mittra

Data Engineer

at CustomerGlu

Founded 2016  •  Products & Services  •  20-100 employees  •  Raised funding
Data engineering
Data Engineer
MongoDB
DynamoDB
Apache
Apache Kafka
Hadoop
pandas
NumPy
Python
Machine Learning (ML)
Big Data
API
Data Structures
AWS Lambda
AWS Glue
Bengaluru (Bangalore)
2 - 3 yrs
₹8L - ₹12L / yr

CustomerGlu is a low code interactive user engagement platform. We're backed by Techstars and top-notch VCs from the US like Better Capital and SmartStart.

As we begin building repeatability into our core product offering at CustomerGlu, high-quality data infrastructure and applications are emerging as a key requirement — both to drive more ROI from our interactive engagement programs and to generate ideas for new campaigns.

Hence we are adding more team members to our existing data team and looking for a Data Engineer.

Responsibilities

  • Design and build a high-performing data platform that is responsible for the extraction, transformation, and loading of data.
  • Develop low-latency real-time data analytics and segmentation applications.
  • Setup infrastructure for easily building data products on top of the data platform.
  • Be responsible for logging, monitoring, and error recovery of data pipelines.
  • Build workflows for automated scheduling of data transformation processes.
  • Ability to lead a team.
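The extraction, transformation, and loading responsibility above can be pictured as three small functions. A deliberately minimal, pure-Python sketch — the event shapes and field names are invented for illustration, not taken from CustomerGlu's platform:

```python
def extract() -> list:
    """Pretend source: raw engagement events as they might arrive from an API."""
    return [
        {"user": "u1", "event": "spin_wheel", "ms": 130},
        {"user": "u2", "event": "quiz_start", "ms": None},  # dirty record
        {"user": "u1", "event": "quiz_start", "ms": 90},
    ]

def transform(rows: list) -> list:
    """Drop records with missing latency and convert ms to seconds."""
    return [
        {**r, "seconds": r["ms"] / 1000.0}
        for r in rows
        if r["ms"] is not None
    ]

def load(rows: list, store: dict) -> None:
    """Group cleaned rows by user into a dict standing in for a target table."""
    for r in rows:
        store.setdefault(r["user"], []).append(r["event"])

warehouse = {}
load(transform(extract()), warehouse)
print(warehouse)  # {'u1': ['spin_wheel', 'quiz_start']}
```

A production platform swaps each stage for real systems (an API or queue for extract, Spark for transform, a warehouse for load), but the pipeline shape is the same.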

Requirements

  • 3+ years of experience and ability to manage a team
  • Experience working with databases like MongoDB and DynamoDB.
  • Knowledge of building batch data processing applications using Apache Spark.
  • Understanding of how backend services like HTTP APIs and Queues work.
  • Write good quality, maintainable code in one or more programming languages like Python, Scala, and Java.
  • Working knowledge of version control systems like Git.

Bonus Skills

  • Experience in real-time data processing using Apache Kafka or AWS Kinesis.
  • Experience with AWS tools like Lambda and Glue.
Job posted by
Barkha Budhori

Event & Unstructured Data

at a company that provides both wholesale and retail funding

Agency job
via Multi Recruit
AWS Kinesis
Data engineering
AWS Lambda
DynamoDB
data pipeline
Data governance
Data processing
Amazon Web Services (AWS)
Athena
Audio
Linux/Unix
Python
SQL
Weblogs
Kinesis
Lambda
Mumbai
5 - 7 yrs
₹20L - ₹25L / yr
  • The key responsibility is to design and develop a data pipeline for real-time data integration and processing, executing the model (if required), and exposing output via MQ / API / NoSQL DB for consumption
  • Provide technical expertise to design efficient data-ingestion solutions to store and process unstructured data, such as documents, audio, images, weblogs, etc.
  • Develop API services to provide data as a service
  • Prototype solutions for complex data processing problems using AWS cloud-native services
  • Implement automated audit and quality-assurance checks in the data pipeline
  • Document and maintain data lineage from various sources to enable data governance
  • Coordinate with BIU, IT, and other stakeholders to provide best-in-class data pipeline solutions: exposing data via APIs, loading into downstream systems, NoSQL databases, etc.
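The first bullet — real-time integration that executes a model and ends in an MQ or NoSQL sink — follows a consume-enrich-publish loop. A minimal stdlib sketch of that shape (the event fields, queue roles, and flagging rule are illustrative inventions, not from any actual system):

```python
import queue

in_q = queue.Queue()   # stands in for a Kinesis shard / MQ topic
out_q = queue.Queue()  # stands in for the downstream sink

def enrich(event: dict) -> dict:
    """Model-execution placeholder: tag large transactions for review."""
    return {**event, "flagged": event["amount"] > 10_000}

def process_available(in_q, out_q) -> int:
    """Drain whatever is currently queued, enrich each event, publish it on."""
    handled = 0
    while not in_q.empty():
        out_q.put(enrich(in_q.get()))
        handled += 1
    return handled

for amt in (500, 25_000):
    in_q.put({"amount": amt})
print(process_available(in_q, out_q))  # 2
```

With Kinesis, the same loop reads record batches from a shard and writes results to DynamoDB or an API, with checkpointing so a restarted consumer resumes where it left off.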

Skills

  • Programming experience using Python & SQL
  • Extensive working experience in data engineering projects, using AWS Kinesis, AWS S3, DynamoDB, EMR, Lambda, Athena, etc. for event processing
  • Experience and expertise in implementing complex data pipelines
  • Strong Familiarity with AWS Toolset for Storage & Processing. Able to recommend the right tools/solutions available to address specific data processing problems
  • Hands-on experience in Unstructured (Audio, Image, Documents, Weblogs, etc) Data processing.
  • Good analytical skills with the ability to synthesize data to design and deliver meaningful information
  • Know-how on any No-SQL DB (DynamoDB, MongoDB, CosmosDB, etc) will be an advantage.
  • Ability to understand business functionality, processes, and flows
  • Good combination of technical and interpersonal skills with strong written and verbal communication; detail-oriented with the ability to work independently

Functional knowledge

  • Real-time Event Processing
  • Data Governance & Quality assurance
  • Containerized deployment
  • Linux
  • Unstructured Data Processing
  • AWS Toolsets for Storage & Processing
  • Data Security

 

Job posted by
Sapna Deb

Data Engineer

at Easebuzz

Founded 2016  •  Product  •  100-500 employees  •  Raised funding
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
Data Analytics
Apache Kafka
SQL
Amazon Web Services (AWS)
Big Data
DynamoDB
MongoDB
EMR
Amazon Redshift
ETL
Data architecture
Data modeling
Pune
2 - 4 yrs
₹2L - ₹20L / yr

Company Profile:

 

Easebuzz is a payment-solutions company (a fintech organisation) which enables online merchants to accept, process, and disburse payments through developer-friendly APIs. We focus on building plug-and-play products, including the payment infrastructure, to solve complete business problems. Definitely a wonderful place, where all the action related to payments, lending, subscriptions, and eKYC is happening at the same time.

 

We have been consistently profitable and are constantly developing new, innovative products; as a result, we have grown 4x over the past year alone. We are well capitalised and recently closed a fundraise of $4M in March 2021 from prominent VC firms and angel investors. The company is based out of Pune and has a total strength of 180 employees. Easebuzz's corporate culture is tied into the vision of building a workplace which breeds open communication and minimal bureaucracy. An equal-opportunity employer, we welcome and encourage diversity in the workplace. One thing you can be sure of is that you will be surrounded by colleagues who are committed to helping each other grow.

 

Easebuzz Pvt. Ltd. has its presence in Pune, Bangalore, Gurugram.

 


Salary: As per company standards.

 

Designation: Data Engineer

 

Location: Pune

 

  • Experience with ETL, data modeling, and data architecture.
  • Design, build, and operationalize large-scale enterprise data solutions and applications using one or more AWS data and analytics services in combination with 3rd parties - Spark, EMR, DynamoDB, Redshift, Kinesis, Lambda, Glue.
  • Experience with an AWS cloud data lake for development of real-time or near-real-time use cases.
  • Experience with messaging systems such as Kafka/Kinesis for real-time data ingestion and processing.
  • Build data pipeline frameworks to automate high-volume and real-time data delivery.
  • Create prototypes and proofs-of-concept for iterative development.
  • Experience with NoSQL databases such as DynamoDB, MongoDB, etc.
  • Create and maintain optimal data pipeline architecture.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS 'big data' technologies.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
  • Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
  • Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
  • Create data tools for analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader.
  • Evangelize a very high standard of quality, reliability, and performance for data models and algorithms that can be streamlined into the engineering and science workflows.
  • Build and enhance data pipeline architecture by designing and implementing data ingestion solutions.

 

Employment Type

Full-time

 

Job posted by
Amala Baby

Data Architect

at Searce Inc

Founded 2004  •  Products & Services  •  100-1000 employees  •  Profitable
Big Data
Hadoop
Spark
Apache Hive
ETL
Apache Kafka
Data architecture
Google Cloud Platform (GCP)
Python
Java
Scala
Data engineering
Mumbai
5 - 9 yrs
₹15L - ₹22L / yr
JD of Data Architect
As a Data Architect, you will work with business leads, analysts, and data scientists to understand the business domain, and manage data engineers to build data products that empower better decision-making. You are passionate about the data quality of our business metrics and about the flexibility of your solutions, which scale to respond to broader business questions.
If you love to solve problems using your skills, then come join Team Searce. We have a casual and fun office environment that actively steers clear of rigid "corporate" culture, focuses on productivity and creativity, and allows you to be part of a world-class team while still being yourself.

What You’ll Do
● Understand the business problem and translate these to data services and engineering
outcomes
● Explore new technologies and learn new techniques to solve business problems
creatively
● Collaborate with many teams - engineering and business, to build better data products
● Manage team and handle delivery of 2-3 projects

What We’re Looking For
● 4-6 years of experience, including:
○ Hands-on experience with any one programming language (Python, Java, Scala)
○ Understanding of SQL is a must
○ Big data (Hadoop, Hive, Yarn, Sqoop)
○ MPP platforms (Spark, Presto)
○ Data-pipeline & scheduler tools (Oozie, Airflow, NiFi)
○ Streaming engines (Kafka, Storm, Spark Streaming)
○ Any relational database or DW experience
○ Any ETL tool experience
● Hands-on experience in pipeline design, ETL and application development
● Hands-on experience in cloud platforms like AWS, GCP etc.
● Good communication skills and strong analytical skills
● Experience in team handling and project delivery
Job posted by
Reena Bandekar

Machine Learning Engineer

at SmartJoules

Founded 2015  •  Product  •  100-500 employees  •  Profitable
Machine Learning (ML)
Python
Big Data
Apache Spark
Deep Learning
Remote, NCR (Delhi | Gurgaon | Noida)
3 - 5 yrs
₹8L - ₹12L / yr

Responsibilities:

  • Explore and visualize data to gain an understanding of it, then identify differences in data distribution that could affect performance when deploying the model in the real world.
  • Verify data quality, and/or ensure it via data cleaning.
  • Adapt and work fast to produce output that upgrades the decision-making of stakeholders using ML.
  • Design and develop machine learning systems and schemes.
  • Perform statistical analysis and fine-tune models using test results.
  • Train and retrain ML systems and models as and when necessary.
  • Deploy ML models in production and manage the cost of cloud infrastructure.
  • Develop machine learning apps according to client and data scientist requirements.
  • Analyze the problem-solving capabilities and use cases of ML algorithms, and rank them by how successful they are in meeting the objective.
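To make the train/fine-tune loop in these responsibilities concrete, here is a deliberately tiny, dependency-free sketch of fitting y ≈ w·x by gradient descent; the data, initial weight, and learning rate are invented for illustration:

```python
# Fit y = w * x on toy data with plain gradient descent (no libraries).
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]  # the true slope is exactly 2

w = 0.0    # initial guess
lr = 0.02  # learning rate, chosen by hand for this toy problem
for _ in range(200):
    # Gradient of mean squared error 0.5*(w*x - y)^2 w.r.t. w, averaged over data.
    grad = sum((w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad

print(round(w, 3))  # ≈ 2.0
```

Real work replaces this scalar with millions of parameters and frameworks like TensorFlow, but "compute gradient of a loss, step against it, repeat" is the same core loop.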


Technical Knowledge:


  • Has worked on real-time problems, solved them using ML and deep learning models deployed in real time, and has some impressive projects to showcase.
  • Proficiency in Python, and experience working with the Jupyter framework, Google Colab, and cloud-hosted notebooks such as AWS SageMaker, Databricks, etc.
  • Proficiency with libraries such as scikit-learn, TensorFlow, OpenCV, PySpark, pandas, NumPy, and related libraries.
  • Expert at visualising and manipulating complex datasets.
  • Proficiency with visualisation libraries such as seaborn, Plotly, matplotlib, etc.
  • Proficiency in the linear algebra, statistics, and probability required for machine learning.
  • Proficiency in ML algorithms, for example gradient boosting, stacked machine learning, classification algorithms, and deep learning algorithms. Needs experience in hyperparameter tuning of various models and in comparing the results of algorithm performance.
  • Big data technologies such as the Hadoop stack and Spark.
  • Basic use of cloud VMs (e.g., EC2).
  • Brownie points for Kubernetes and task queues.
  • Strong written and verbal communication.
  • Experience working in an Agile environment.
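The hyperparameter-tuning experience asked for above boils down to: try candidate settings, score each on held-out data, keep the best. A library-free sketch of that grid-search loop — the "model" here is a trivial threshold classifier, and the data are invented purely for illustration:

```python
# Toy 1-D dataset: label is 1 when the feature exceeds some unknown cutoff.
train = [(0.2, 0), (0.4, 0), (0.6, 1), (0.9, 1)]
valid = [(0.3, 0), (0.7, 1), (0.8, 1)]

def accuracy(threshold: float, data) -> float:
    """Score a threshold classifier: predict 1 iff x > threshold."""
    hits = sum((x > threshold) == bool(y) for x, y in data)
    return hits / len(data)

# Grid search: evaluate each candidate on the validation split, keep the best.
grid = [0.1, 0.3, 0.5, 0.7]
best = max(grid, key=lambda t: accuracy(t, valid))
print(best, accuracy(best, valid))
```

Tuning a gradient-boosted model with scikit-learn's `GridSearchCV` follows exactly this pattern, just with cross-validation folds and a multi-dimensional parameter grid.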
Job posted by
Saksham Dutta

Lead Data Engineer

at Lymbyc

Founded 2012  •  Product  •  100-500 employees  •  Profitable
Apache Spark
Apache Kafka
Druid Database
Big Data
Apache Sqoop
RESTful APIs
Elasticsearch
Apache Ranger
Apache Atlas
kappa
Bengaluru (Bangalore), Chennai
4 - 8 yrs
₹9L - ₹14L / yr
Key skill set: Apache NiFi, Kafka Connect (Confluent), Sqoop, Kylo, Spark, Druid, Presto, RESTful services, Lambda / Kappa architectures

Responsibilities:
- Build a scalable, reliable, operable, and performant big data platform for both streaming and batch analytics
- Design and implement data aggregation, cleansing, and transformation layers

Skills:
- Around 4+ years of hands-on experience designing and operating large data platforms
- Experience in big data ingestion, transformation, and stream/batch processing technologies using Apache NiFi, Apache Kafka, Kafka Connect (Confluent), Sqoop, Spark, Storm, Hive, etc.
- Experience in designing and building streaming data platforms in Lambda and Kappa architectures
- Working experience in one of the NoSQL / OLAP data stores such as Druid, Cassandra, Elasticsearch, Pinot, etc.
- Experience with one of the data warehousing tools such as Redshift, BigQuery, Azure SQL Data Warehouse
- Exposure to other data ingestion, data lake, and querying frameworks such as Marmaray, Kylo, Drill, Presto
- Experience in designing and consuming microservices
- Exposure to security and governance tools such as Apache Ranger, Apache Atlas
- Any contributions to open-source projects a plus
- Experience in performance benchmarks a plus
Job posted by
Venky Thiriveedhi

Computer Vision Scientist - Machine Learning

at FarmGuide

Founded 2016  •  Products & Services  •  100-1000 employees  •  Raised funding
Computer Security
Image processing
OpenCV
Python
Rational ClearCase
SAS GIS
AWS Simple Notification Service (SNS)
Startup
Cloudera
Machine Learning (ML)
Game Design
Data Science
Amazon Web Services (AWS)
NCR (Delhi | Gurgaon | Noida)
0 - 8 yrs
₹7L - ₹14L / yr
FarmGuide is a data-driven tech startup aiming to digitize the periodic processes in place and bring information symmetry to the agriculture supply chain through transparent, dynamic & interactive software solutions. We at FarmGuide (https://angel.co/farmguide) help the Government in relevant and efficient policy making by ensuring a seamless flow of information between stakeholders.

Job Description:
We are looking for individuals who want to help us design cutting-edge scalable products to meet our rapidly growing business. We are building out the data science team and looking to hire across levels.
- Solving complex problems in the agri-tech sector, which are long-standing open problems at the national level.
- Applying computer vision techniques to satellite imagery to deduce artefacts of interest.
- Applying various machine learning techniques to digitize the existing physical corpus of knowledge in the sector.

Key Responsibilities:
- Develop computer vision algorithms for production use on satellite and aerial imagery
- Implement models and data pipelines to analyse terabytes of data
- Deploy built models in a production environment
- Develop tools to assess algorithm accuracy
- Implement algorithms at scale in the commercial cloud

Skills Required:
- B.Tech / M.Tech in CS or other related fields such as EE or MCA from IIT/NIT/BITS, but not compulsory
- Demonstrable interest in machine learning and computer vision, such as coursework, open-source contribution, etc.
- Experience with digital image processing techniques
- Familiarity/experience with geospatial, planetary, or astronomical datasets is valuable
- Experience in writing algorithms to manipulate geospatial data
- Hands-on knowledge of GDAL or open-source GIS tools is a plus
- Familiarity with cloud systems (AWS/Google Cloud) and cloud infrastructure is a plus
- Experience with high-performance or large-scale computing infrastructure might be helpful
- Coding ability in R or Python
- Self-directed team player who thrives in a continually changing environment

What is on offer:
- A high-impact role in a young startup with colleagues from IITs and other Tier 1 colleges
- A chance to work on the cutting edge of ML (yes, we do train neural nets on GPUs)
- Lots of freedom in terms of the work you do and how you do it
- Flexible timings
- Best startup salary in the industry, with additional tax benefits
Job posted by
Anupam Arya

Engineering Manager

at Uber

Founded 2012  •  Product  •  500-1000 employees  •  Raised funding
Big Data
Leadership
Engineering Management
Architecture
Bengaluru (Bangalore)
9 - 15 yrs
₹50L - ₹80L / yr
Minimum 5+ years of experience as a manager, and overall 10+ years of industry experience in a variety of contexts, during which you've built scalable, robust, and fault-tolerant systems.

- Solid knowledge of the whole web stack: front-end, back-end, databases, cache layer, HTTP protocol, TCP/IP, Linux, CPU architecture, etc. You are comfortable jamming on complex architecture and design principles with senior engineers.
- Bias for action. You believe that speed and quality aren't mutually exclusive. You've shown good judgement about shipping as fast as possible while still making sure that products are built in a sustainable, responsible way.
- Mentorship/guidance. You know that the most important part of your job is setting the team up for success. Through mentoring, teaching, and reviewing, you help other engineers make sound architectural decisions, improve their code quality, and get out of their comfort zone.
- Commitment. You care tremendously about keeping the Uber experience consistent for users and strive to make any issues invisible to riders. You hold yourself personally accountable, jumping in and taking ownership of problems that might not even be in your team's scope.
- Hiring know-how. You're a thoughtful interviewer who constantly raises the bar for excellence. You believe that what seems amazing one day becomes the norm the next, and that each new hire should significantly improve the team.
- Design and business vision. You help your team understand requirements beyond the written word, and you thrive in an environment where you can uncover subtle details. Even in the absence of a PM or a designer, you show great attention to the design and product aspects of anything your team ships.
Job posted by
Swati Singh