Senior Systems Engineer – Big Data
Posted by Deleted User
2 - 5 yrs
₹5L - ₹10L / yr (ESOP available)
Full time
Bengaluru (Bangalore)
Skills
Big Data
Hadoop
DevOps
Apache Spark
Spark
Shell Scripting
Docker
Kubernetes
Chef
Ambari
Skills Requirements
 Knowledge of Hadoop ecosystem installation, initial configuration and performance tuning (a small automation sketch follows this list).
 Expert with Apache Ambari, Spark, Unix shell scripting, Kubernetes and Docker.
 Knowledge of Python is desirable.
 Experience with HDP Manager/clients and various dashboards.
 Understanding of Hadoop security (Kerberos, Ranger and Knox), encryption and data masking.
 Experience with automation/configuration management using Chef, Ansible or an equivalent.
 Strong experience with any Linux distribution.
 Basic understanding of network technologies, CPU, memory and storage.
 Database administration is a plus.
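Purely as an illustration of the shell-level automation this role implies (an assumption, not part of the listing), here is a minimal Python sketch that polls HDFS health via the standard `hdfs dfsadmin -report` command:

```python
# Minimal sketch: check HDFS cluster health from Python by shelling out to the
# standard `hdfs dfsadmin -report` command and flagging dead DataNodes.
# Assumes the `hdfs` CLI is on PATH on an existing cluster node.
import subprocess

def dead_datanodes() -> int:
    """Return the number of dead DataNodes reported by `hdfs dfsadmin -report`."""
    out = subprocess.run(
        ["hdfs", "dfsadmin", "-report"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        # The report contains a header line like: "Dead datanodes (2):"
        if line.strip().startswith("Dead datanodes"):
            return int(line.split("(")[1].split(")")[0])
    return 0

if __name__ == "__main__":
    print(f"Dead DataNodes: {dead_datanodes()}")
```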
Qualifications and Education Requirements
 2 to 4 years of experience with, and detailed knowledge of, core Hadoop components and dashboards running on Big Data technologies such as Hadoop/Spark.
 Bachelor's degree or equivalent in Computer Science, Information Technology or related fields.

About Couture.ai

Founded : 2017
Stage : Profitable
About
We are building a patent-pending AI platform targeted at vertical-specific solutions.
The platform is already licensed by Reliance Jio and a few European retailers to power real-time tailored experiences for their combined 200+ million end users.

After integrating our SDK with initial clients, we have seen product views increase by 25% and sales conversions go up as much as 3x.

 

Couture.ai was founded by global innovators and entrepreneurs with experience of self-funding and building global startup success stories in the past. The core team consists of some of the best minds in India in machine learning and deep learning research.

 

 

Connect with the team
Prashanta Singh
Ashish Garg
Shobhit Agarwal
Prateek Srivastava

Similar jobs

Bengaluru (Bangalore)
4 - 7 yrs
₹20L - ₹30L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+5 more
Roles & Responsibilities
What will you do?
  • Deliver plugins for our Python-based ETL pipelines (see the sketch after this list)
  • Deliver Python microservices for provisioning and managing cloud infrastructure
  • Implement algorithms to analyse large data sets
  • Draft design documents that translate requirements into code
  • Effectively manage the challenges of handling large volumes of data while working to tight deadlines
  • Manage expectations with internal stakeholders and context-switch in a fast-paced environment
  • Thrive in an environment that uses AWS and Elasticsearch extensively
  • Keep abreast of technology and contribute to the engineering strategy
  • Champion best development practices and provide mentorship to others
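To make the first bullet concrete, here is a minimal sketch of what a plugin for a Python-based ETL pipeline could look like; the registry convention and names are hypothetical, not taken from the listing.

```python
# Minimal sketch of an ETL plugin convention (hypothetical interface):
# each plugin registers a named transform the pipeline can discover.
from typing import Callable, Dict, Iterable

# Hypothetical plugin registry; a real pipeline might use entry points instead.
TRANSFORMS: Dict[str, Callable[[Iterable[dict]], Iterable[dict]]] = {}

def register(name: str):
    """Decorator that registers a transform under a name."""
    def wrap(fn):
        TRANSFORMS[name] = fn
        return fn
    return wrap

@register("drop_nulls")
def drop_nulls(records):
    """Yield only records whose values are all non-null."""
    for rec in records:
        if all(v is not None for v in rec.values()):
            yield rec

# Usage: the pipeline looks up a transform by name and streams records through it.
cleaned = list(TRANSFORMS["drop_nulls"]([{"id": 1}, {"id": None}]))
print(cleaned)  # [{'id': 1}]
```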
What are we looking for?
  • First and foremost, you are a Python developer, experienced with the Python data stack
  • You love and care about data
  • Your code is an artistic manifesto, reflecting the elegance of what you do
  • You feel a spark of joy when a new abstraction or pattern arises from your code
  • You follow the DRY (Don’t Repeat Yourself) and KISS (Keep It Short and Simple) principles
  • You are a continuous learner
  • You have a natural willingness to automate tasks
  • You have critical thinking and an eye for detail
  • Excellent ability and experience in working to tight deadlines
  • Sharp analytical and problem-solving skills
  • Strong sense of ownership and accountability for your work and delivery
  • Excellent written and oral communication skills
  • Mature collaboration and mentoring abilities
  • We are keen to know your digital footprint (community talks, blog posts, certifications, courses you have participated in or are keen to, personal projects, and any contributions to open-source communities)
Nice to have:
  • Delivering complex software, ideally in a FinTech setting
  • Experience with CI/CD tools such as Jenkins, CircleCI
  • Experience with code versioning (Git / Mercurial / Subversion)
Jio Platforms Limited
3 recruiters
Dixit Nahar
Posted by Dixit Nahar
Navi Mumbai, Mumbai
3 - 5 yrs
₹9L - ₹15L / yr
Python
TensorFlow
Keras
Apache Kafka
Spark
+7 more
Role: Data Scientist / Machine Learning Scientist / Deep Learning Engineer (3-5 yrs experience)

Must know (programming): Python, TensorFlow, Keras, Kafka, Spark.
Must have worked in video analytics with at least two deep learning model families, e.g. R-CNN, LSTM, object detection models like YOLO, and object tracking models like Deep SORT (a minimal pipeline sketch follows below).
Must have solid model training and testing experience with both structured (statistical machine learning) and unstructured data.
Must be good with statistics.
Good to have: data visualization experience in Python or any data visualization tool.
Good to have: Kubernetes and multiprocessing experience, and MLOps tooling such as Docker and Hydra.
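For flavor, a minimal sketch of the video-analytics loop such a role revolves around: read frames, run a detector, hand detections to a tracker. The detector is a placeholder (the listing names YOLO for detection and Deep SORT for tracking), and the input file name is hypothetical.

```python
# Minimal sketch of a video-analytics loop: read frames, run a detector,
# pass detections to a tracker. The detector below is a placeholder.
import cv2  # pip install opencv-python

def detect(frame):
    """Placeholder for a YOLO-style detector returning [(x, y, w, h, score), ...]."""
    return []

cap = cv2.VideoCapture("input.mp4")  # hypothetical input file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    detections = detect(frame)
    # A Deep SORT tracker would consume `detections` here to assign track IDs.
cap.release()
```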

Team: We are a team of 9 data scientists working on video analytics and data analytics projects, both for the internal AI requirements of Reliance Industries and for external business. At any time we make progress on multiple projects (at least 4) in video analytics or data analytics.
Product and Service based company
Hyderabad, Ahmedabad
4 - 8 yrs
₹15L - ₹30L / yr
Amazon Web Services (AWS)
Apache
Snowflake schema
Python
Spark
+13 more

Job Description

 

Mandatory Requirements 

  • Experience in AWS Glue

  • Experience in Apache Parquet 

  • Proficient in AWS S3 and data lake 

  • Knowledge of Snowflake

  • Understanding of file-based ingestion best practices.

  • Scripting languages: Python & PySpark (see the sketch below)
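A minimal PySpark sketch of the kind of file-based ingestion these requirements describe: read Parquet from S3 and land it in a curated zone. Bucket names and the partition column are hypothetical; in an AWS Glue job the SparkSession would come from the Glue job context instead.

```python
# Minimal PySpark sketch of file-based ingestion: read Parquet from S3 and
# land it, partitioned, in a curated zone of the data lake.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("parquet-ingest").getOrCreate()

df = spark.read.parquet("s3a://raw-bucket/events/")  # hypothetical source path
(df.filter("event_date IS NOT NULL")                 # drop rows missing the partition key
   .write.mode("append")
   .partitionBy("event_date")
   .parquet("s3a://lake-bucket/curated/events/"))    # hypothetical target path
```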

CORE RESPONSIBILITIES

  • Create and manage cloud resources in AWS 

  • Data ingestion from different data sources that expose data using different technologies, such as RDBMS, flat files, streams, and time-series data from various proprietary systems. Implement data ingestion and processing with the help of Big Data technologies 

  • Data processing/transformation using various technologies such as Spark and Cloud Services. You will need to understand your part of business logic and implement it using the language supported by the base data platform 

  • Develop automated data quality checks to make sure the right data enters the platform, and verify the results of the calculations (see the sketch at the end of this list) 

  • Develop an infrastructure to collect, transform, combine and publish/distribute customer data.

  • Define process improvement opportunities to optimize data collection, insights and displays.

  • Ensure data and results are accessible, scalable, efficient, accurate, complete and flexible 

  • Identify and interpret trends and patterns from complex data sets 

  • Construct a framework utilizing data visualization tools and techniques to present consolidated analytical and actionable results to relevant stakeholders. 

  • Key participant in regular Scrum ceremonies with the agile teams  

  • Proficient at developing queries, writing reports and presenting findings 

  • Mentor junior members and bring in industry best practices.
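A minimal sketch of the automated data quality check mentioned above, assuming a PySpark DataFrame from the ingestion step; the column names and threshold are hypothetical.

```python
# Minimal sketch of an automated data quality gate over a PySpark DataFrame:
# fail fast if required columns are missing or too sparsely populated.
def check_quality(df, required_cols=("customer_id", "amount"), max_null_ratio=0.01):
    """Raise ValueError if `df` fails basic completeness checks."""
    missing = [c for c in required_cols if c not in df.columns]
    if missing:
        raise ValueError(f"Missing required columns: {missing}")
    total = df.count()
    for c in required_cols:
        nulls = df.filter(df[c].isNull()).count()
        if total and nulls / total > max_null_ratio:
            raise ValueError(f"Column {c!r}: null ratio {nulls/total:.2%} exceeds threshold")
    return True
```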

 

QUALIFICATIONS

  • 5-7+ years’ experience as a data engineer in consumer finance or an equivalent industry (consumer loans, collections, servicing, optional products, and insurance sales) 

  • Strong background in math, statistics, computer science, data science or related discipline

  • Advanced knowledge of at least one of the following languages: Java, Scala, Python, C# 

  • Production experience with: HDFS, YARN, Hive, Spark, Kafka, Oozie / Airflow, Amazon Web Services (AWS), Docker / Kubernetes, Snowflake  

  • Proficient with:
      • Data mining/programming tools (e.g. SAS, SQL, R, Python)
      • Database technologies (e.g. PostgreSQL, Redshift, Snowflake, and Greenplum)
      • Data visualization tools (e.g. Tableau, Looker, MicroStrategy)

  • Comfortable learning about and deploying new technologies and tools. 

  • Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines. 

  • Good written and oral communication skills and ability to present results to non-technical audiences 

  • Knowledge of business intelligence and analytical tools, technologies and techniques.

Familiarity and experience in the following is a plus: 

  • AWS certification

  • Spark Streaming 

  • Kafka Streaming / Kafka Connect 

  • ELK Stack 

  • Cassandra / MongoDB 

  • CI/CD: Jenkins, GitLab, Jira, Confluence, and other related tools

LiftOff Software India
2 recruiters
Hameeda Haider
Posted by Hameeda Haider
Remote, Bengaluru (Bangalore)
5 - 8 yrs
₹1L - ₹30L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark

Why LiftOff? 

 

We at LiftOff specialize in product creation; our main forte lies in helping entrepreneurs realize their dreams. We have helped businesses and entrepreneurs launch more than 70 products.

Many on the team are serial entrepreneurs with a history of successful exits.

 

As a Data Engineer, you will work directly with our founders and alongside our engineers on a variety of software projects covering various languages, frameworks, and application architectures.

 

About the Role

 

If you’re driven by the passion to build something great from scratch, a desire to innovate, and a commitment to achieve excellence in your craft, LiftOff is a great place for you.


  • Architect, design and configure the data ingestion pipeline for data received from 3rd party vendors
  • Data loading should be configured with ease/flexibility for adding new data sources as well as refreshing previously loaded data
  • Design and implement a consumer graph that provides an efficient means to query the data via email, phone, and address information (using any one of the fields or a combination); see the sketch after this list
  • Expose the consumer graph/search capability for consumption by our middleware APIs, which will be surfaced in the portal
  • Design and review the current client-specific data storage, which is kept as a copy of the consumer master data for easier retrieval/query for subsequent usage
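A minimal sketch of the multi-key lookup the consumer graph must support: resolve a consumer by email, phone, or address, alone or in combination. In-memory dicts stand in for the real graph or search backend, and all names and data here are hypothetical.

```python
# Minimal sketch of multi-key consumer resolution: build an inverted index
# over (field, value) pairs and intersect matches across the supplied criteria.
consumers = {
    "c1": {"email": "a@example.com", "phone": "555-0100", "address": "1 Main St"},
}
index = {}
for cid, rec in consumers.items():
    for field in ("email", "phone", "address"):
        index.setdefault((field, rec[field]), set()).add(cid)

def lookup(**criteria):
    """Return consumer IDs matching ALL provided fields, e.g. lookup(email=..., phone=...)."""
    matches = [index.get((f, v), set()) for f, v in criteria.items()]
    return set.intersection(*matches) if matches else set()

print(lookup(email="a@example.com"))                    # {'c1'}
print(lookup(email="a@example.com", phone="555-0100"))  # {'c1'}
```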


Please Note that this is for a Consultant Role

Candidates who are okay with freelancing/Part-time can apply

HCL Technologies
3 recruiters
Agency job
via Saiva System by Sunny Kumar
Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Bengaluru (Bangalore), Hyderabad, Chennai, Pune, Mumbai, Kolkata
5 - 10 yrs
₹5L - ₹20L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+2 more
Experience: 5+ years
Skills: Spark and Scala, along with Azure
Location: Pan India

Looking for someone with Big Data experience along with Azure.
NA
Agency job
via Talent folks by Rijooshri Saikia
Bengaluru (Bangalore)
15 - 25 yrs
₹10L - ₹15L / yr
Java
Python
Big Data

Job Description

  • Design, development and deployment of highly available and fault-tolerant enterprise business software at scale.

  • Demonstrate tech expertise, going very deep or broad to solve classes of problems or create broadly leverageable solutions.

  • Execute large-scale projects: provide technical leadership in architecting and building product solutions.

  • Collaborate across teams to deliver results, from hardworking team members within your group to smart technologists across lines of business.

  • Be a role model in acting with good judgment and responsibility, helping teams commit and move forward.

  • Be a humble mentor and trusted advisor for both our talented team members and passionate leaders alike. Deal with differences of opinion in a mature and fair way.

  • Raise the bar by improving standard methodologies and producing best-in-class, efficient solutions, code, documentation, testing, and monitoring.

Qualifications

  • 15+ years of relevant engineering experience.

  • Proven record of building and productionizing highly reliable products at scale.

  • Experience with Java and Python

  • Experience with Big Data technologies is a plus.

  • Ability to assess new technologies and make pragmatic choices that help guide us towards a long-term vision

  • Can collaborate well with several other engineering orgs to articulate requirements and system design

Additional Information

Professional Attributes:
• Team player!

• Great interpersonal skills, deep technical ability, and a portfolio of successful execution.

• Excellent written and verbal communication skills, including the ability to write detailed technical documents.

• Passionate about helping teams grow by inspiring and mentoring engineers.



Rakuten
1 video
1 recruiter
Agency job
via zyoin by RAKESH RANJAN
Remote, Bengaluru (Bangalore)
5 - 8 yrs
₹20L - ₹38L / yr
Big Data
Spark
Hadoop
Apache Kafka
Apache Hive
+4 more

Company Overview:

Rakuten, Inc. (TSE First Section: 4755) is the largest e-commerce company in Japan, and the third-largest e-commerce marketplace company worldwide. Rakuten provides a variety of consumer- and business-focused services including e-commerce, e-reading, travel, banking, securities, credit cards, e-money, portal and media, online marketing and professional sports. The company is expanding globally and currently has operations throughout Asia, Western Europe, and the Americas. Founded in 1997, Rakuten is headquartered in Tokyo, with over 17,000 employees and partner staff worldwide. Rakuten's 2018 revenues were 1,101.48 billion yen. In Japanese, Rakuten stands for ‘optimism’: it means we believe in the future, and an understanding that, with the right mindset, we can make the future better by what we do today. Today, our 70+ businesses span e-commerce, digital content, communications and FinTech, bringing the joy of discovery to more than 1.2 billion members across the world.


Website : https://www.rakuten.com/

Crunchbase : Rakuten has raised a total of $42.4M in funding over 2 rounds

Company size : 10,001+ employees

Founded : 1997

Headquarters : Tokyo, Japan

Work location : Bangalore (M.G.Road)


Please find the job description below.


Role Description – Data Engineer for AN group (Location - India)

 

Key responsibilities include:

 

We are looking for an engineering candidate for our Autonomous Networking team. The ideal candidate must have the following abilities:

 

  • Hands-on experience in big data computation technologies (at least one and potentially several of the following: Spark and Spark Streaming, Hadoop, Storm, Kafka Streams, Flink, etc.)
  • Familiarity with other related big data technologies, such as big data storage technologies (e.g., Phoenix/HBase, Redshift, Presto/Athena, Hive, Spark SQL, Bigtable, BigQuery, ClickHouse, etc.), messaging layers (Kafka, Kinesis, etc.), cloud- and container-based deployments (Docker, Kubernetes, etc.), Scala, Akka, Socket.IO, Elasticsearch, RabbitMQ, Redis, Couchbase, Java, Go
  • Partner with product management and delivery teams to align and prioritize current and future product development initiatives in support of our business objectives
  • Work with cross-functional engineering teams including QA, Platform Delivery and DevOps
  • Evaluate current-state solutions to identify areas to improve standards, simplify and enhance functionality, and/or transition to effective solutions that improve supportability and time to market
  • Not afraid of refactoring the existing system and guiding the team through it
  • Experience with event-driven architecture and complex event processing
  • Extensive experience building and owning large-scale distributed backend systems
Prescience Decision Solutions
Shivakumar K
Posted by Shivakumar K
Bengaluru (Bangalore)
3 - 7 yrs
₹10L - ₹20L / yr
Big Data
ETL
Spark
Apache Kafka
Apache Spark
+4 more

The Data Engineer will be responsible for selecting and integrating the required Big Data tools and frameworks, and will implement data ingestion and ETL/ELT processes.

Required Experience, Skills and Qualifications:

  • Hands-on experience with Big Data tools/technologies like Spark, Databricks, MapReduce, Hive, HDFS
  • Expertise and excellent understanding of the big data toolset, such as Sqoop, Spark Streaming, Kafka, NiFi (see the sketch after this list)
  • Proficiency in any of the programming languages Python, Scala or Java, with 4+ years’ experience
  • Experience with cloud infrastructures like MS Azure, Data Lake, etc.
  • Good working knowledge of NoSQL DBs (Mongo, HBase, Cassandra)
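A minimal PySpark Structured Streaming sketch of the Kafka-to-Spark pattern implied above: consume a topic and write the payloads to the console. Broker and topic names are hypothetical, and running it requires the spark-sql-kafka package on the Spark classpath.

```python
# Minimal PySpark Structured Streaming sketch: read a Kafka topic and stream
# the decoded payloads to the console sink.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

stream = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
          .option("subscribe", "events")                     # hypothetical topic
          .load())

query = (stream.selectExpr("CAST(value AS STRING) AS payload")
         .writeStream.format("console")
         .outputMode("append")
         .start())
query.awaitTermination()
```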
Aptus Data LAbs
1 recruiter
Merlin Metilda
Posted by Merlin Metilda
Bengaluru (Bangalore)
5 - 10 yrs
₹6L - ₹15L / yr
Data engineering
Big Data
Hadoop
Data Engineer
Apache Kafka
+5 more

Roles & Responsibilities

  1. Proven experience with deploying and tuning open-source components into enterprise-ready production tooling. Experience with data centre (Metal as a Service – MAAS) and cloud deployment technologies (AWS or GCP; Architect certificates required).
  2. Deep understanding of Linux, from kernel mechanisms through user-space management.
  3. Experience with CI/CD (Continuous Integration and Deployment) system solutions (Jenkins).
  4. Use of monitoring tools (local and on public cloud platforms) such as Nagios, Prometheus, Sensu, ELK, CloudWatch, Splunk, New Relic, etc. to trigger instant alerts, reports and dashboards. Work closely with the development and infrastructure teams to analyse and design solutions with four-nines (99.99%) uptime on globally distributed, clustered, production and non-production virtualized infrastructure.
  5. Wide understanding of IP networking as well as data centre infrastructure.

Skills

  1. Expert with software development tools and source code management: understanding and managing issues and code changes, and grouping them into deployment releases in a stable and measurable way to maximize production stability. Must be expert at developing and using Ansible roles and configuring deployment templates with Jinja2 (see the sketch after this list).
  2. Solid understanding of data collection tools like Flume, Filebeat, Metricbeat, and JMX Exporter agents.
  3. Extensive experience operating and tuning the Kafka streaming data platform, specifically as a message queue for big data processing.
  4. Strong understanding of, and hands-on experience with:
  5. the Apache Spark framework, specifically Spark Core and Spark Streaming,
  6. orchestration platforms, Mesos and Kubernetes,
  7. data storage platforms: Elastic Stack, Carbon, ClickHouse, Cassandra, Ceph, HDFS,
  8. core presentation technologies: Kibana and Grafana.
  9. Excellent scripting and programming skills (Bash, Python, Java, Go, Rust). Must have previous experience with Rust in order to support and improve in-house developed products.
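A minimal sketch of the Jinja2 templating called out in item 1, rendered from Python the way an Ansible role renders a deployment template; the template text and variables here are hypothetical.

```python
# Minimal Jinja2 sketch: render a config-file template from variables, the same
# mechanism Ansible uses for its deployment templates.
from jinja2 import Template  # pip install Jinja2

template = Template(
    "bootstrap.servers={{ brokers | join(',') }}\n"
    "log.retention.hours={{ retention_hours }}\n"
)
print(template.render(brokers=["kafka1:9092", "kafka2:9092"], retention_hours=72))
```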

Certification

Red Hat Certified Architect certificate or equivalent required. CCNA certificate required. 3-5 years of experience running open-source big data platforms.

Product Based MNC
Remote, Bengaluru (Bangalore)
5 - 9 yrs
₹5L - ₹20L / yr
Apache Spark
Python
Amazon Web Services (AWS)
SQL

 

Job Description

The role requires experience in AWS, as well as programming experience in Python and Spark.

Roles & Responsibilities

You Will:

  • Translate functional requirements into technical designs
  • Interact with clients and internal stakeholders to understand the data and platform requirements in detail, and determine the core cloud services needed to fulfil the technical design
  • Design, develop and deliver data integration interfaces in AWS
  • Design, develop and deliver data provisioning interfaces to fulfil consumption needs
  • Deliver data models on the cloud platform; this could be on AWS Redshift or SQL
  • Design, develop and deliver data integration interfaces at scale using Python / Spark
  • Automate core activities to minimize delivery lead times and improve overall quality
  • Optimize platform cost by selecting the right platform services and architecting the solution in a cost-effective manner
  • Manage code and deploy DevOps and CI/CD processes
  • Deploy logging and monitoring across the different integration points for critical alerts (see the sketch after this list)
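A minimal boto3 sketch for the monitoring bullet: publish a custom CloudWatch metric that an alarm could alert on. The namespace and metric name are hypothetical; credentials and region come from the standard AWS environment.

```python
# Minimal boto3 sketch: publish a custom metric for an integration point so a
# CloudWatch alarm can trigger a critical alert when it crosses a threshold.
import boto3

cloudwatch = boto3.client("cloudwatch")
cloudwatch.put_metric_data(
    Namespace="DataPlatform/Ingestion",  # hypothetical namespace
    MetricData=[{
        "MetricName": "FailedRecords",   # hypothetical metric name
        "Value": 0,
        "Unit": "Count",
    }],
)
```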

You Have:

  • Minimum 5 years of software development experience
  • Bachelor's and/or Master’s degree in computer science
  • Strong consulting skills in data management, including data governance, data quality, security, data integration, processing and provisioning
  • Delivered data management projects on AWS
  • Translated complex analytical requirements into technical designs including data models, ETLs and dashboards / reports
  • Experience deploying dashboards and self-service analytics solutions on both relational and non-relational databases
  • Experience with different computing paradigms in databases such as in-memory, distributed and massively parallel processing
  • Successfully delivered large-scale data management initiatives covering plan, design, build and deploy phases, leveraging different delivery methodologies including Agile
  • Strong knowledge of continuous integration, static code analysis and test-driven development
  • Experience in delivering projects in a highly collaborative delivery model with teams onsite and offshore
  • Excellent analytical and problem-solving skills
  • Delivered change management initiatives focused on driving data platform adoption across the enterprise
  • Strong verbal and written communication skills are a must, as well as the ability to work effectively across internal and external organizations

 
