Big data Jobs in Bangalore (Bengaluru)


Apply to 50+ Big data Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest Big data Job opportunities across top companies like Google, Amazon & Adobe.

Radisys India
Posted by Sai Kiran
Bengaluru (Bangalore)
5 - 10 yrs
₹5L - ₹25L / yr
Java
J2EE
Spring Boot
Hibernate (Java)
MongoDB
+4 more

Radisys Corporation, a global leader in open telecom solutions, enables service providers to drive disruption with new open architecture business models. Our innovative technology solutions leverage open reference architectures and standards, combined with open software and hardware, to power business transformation for the telecom industry. Our services organization delivers systems integration expertise necessary to solve complex deployment challenges for communications and content providers.


Job Overview:


We are looking for a Lead Engineer - Java with a strong background in Java development and hands-on experience with J2EE, Spring Boot, Kubernetes, microservices, NoSQL, and SQL. As a Lead Engineer, you will be responsible for designing and developing high-quality software solutions and ensuring the successful delivery of projects. The role requires 7 to 10 years of experience and is based in Bangalore, Karnataka, India. This position is a full-time role with excellent growth opportunities.


Qualifications and Skills:


- Bachelor's or master's degree in Computer Science or a related field


- Strong knowledge of Core Java, J2EE, and Spring Boot frameworks


- Hands-on experience with Kubernetes and microservices architecture


- Experience with NoSQL and SQL databases


- Proficient in troubleshooting and debugging complex system issues


- Experience in Enterprise Applications


- Excellent communication and leadership skills


- Ability to work in a fast-paced and collaborative environment


- Strong problem-solving and analytical skills


Roles and Responsibilities:


- Work closely with product management and cross-functional teams to define requirements and deliverables


- Design scalable and high-performance applications using Java, J2EE, and Spring Boot


- Develop and maintain microservices using Kubernetes and containerization


- Design and implement data models using NoSQL and SQL databases


- Ensure the quality and performance of software through code reviews and testing


- Collaborate with stakeholders to identify and resolve technical issues


- Stay up-to-date with the latest industry trends and technologies


xyz
Agency job
via HR BIZ HUB by Pooja shankla
Bengaluru (Bangalore)
4 - 6 yrs
₹12L - ₹15L / yr
Java
Big Data
Apache Hive
Hadoop
Spark

Job Title: Big Data Developer

Job Description

Bachelor's degree in Engineering or Computer Science or equivalent OR Master's in Computer Applications or equivalent.

Solid software development experience, including leading teams of engineers and scrum teams.

4+ years of hands-on experience working with MapReduce, Hive, and Spark (core, SQL, and PySpark).

Solid grasp of data warehousing concepts.

Knowledge of the financial reporting ecosystem is a plus.

4+ years of experience in Data Engineering / Data Warehousing using Big Data technologies is an added advantage.

Expertise in the distributed ecosystem.

Hands-on experience programming in Core Java or Python/Scala.

Expertise in Hadoop and Spark architecture and their working principles.

Hands-on experience writing and understanding complex SQL (Hive/PySpark DataFrames) and optimizing joins while processing large volumes of data (a short sketch follows this list).

Experience in UNIX shell scripting.
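
To make the join-optimization expectation concrete, here is a minimal, hedged PySpark sketch; the table, column, and path names are hypothetical, not from this posting:

    # Broadcasting a small dimension table avoids shuffling the large fact
    # table across the cluster. All names below are illustrative only.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("join-optimization").getOrCreate()

    orders = spark.table("orders")        # large fact table (hypothetical)
    countries = spark.table("countries")  # small dimension table (hypothetical)

    # Broadcast the small side so each executor performs the join locally.
    enriched = orders.join(F.broadcast(countries), on="country_code", how="left")

    daily = enriched.groupBy("order_date", "country_name").agg(
        F.sum("amount").alias("total_amount")
    )
    daily.write.mode("overwrite").parquet("/tmp/daily_totals")

The same join without the broadcast hint would trigger a full shuffle of the fact table, which is the kind of cost this requirement expects candidates to recognize and avoid.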

Roles & Responsibilities

Ability to design and develop optimized data pipelines for batch and real-time data processing

Should have experience in analysis, design, development, testing, and implementation of system applications

Demonstrated ability to develop and document technical and functional specifications and analyze software and system processing flows.

Excellent technical and analytical aptitude

Good communication skills.

Excellent Project management skills.

Results-driven approach.

Mandatory Skills: Big Data, PySpark, Hive

Thoughtworks
Posted by Sunidhi Thakur
Bengaluru (Bangalore)
10 - 13 yrs
Best in industry
Data modeling
PySpark
Data engineering
Big Data
Hadoop
+10 more

Lead Data Engineer

 

Data Engineers develop modern data architecture approaches to meet key business objectives and provide end-to-end data solutions. You might spend a few weeks with a new client on a deep technical review or a complete organizational review, helping them to understand the potential that data brings to solve their most pressing problems. On other projects, you might be acting as the architect, leading the design of technical solutions, or perhaps overseeing a program inception to build a new product. It could also be a software delivery project where you're equally happy coding and tech-leading the team to implement the solution.

 

Job responsibilities

 

·      You might spend a few weeks with a new client on a deep technical review or a complete organizational review, helping them to understand the potential that data brings to solve their most pressing problems

·      You will partner with teammates to create complex data processing pipelines in order to solve our clients' most ambitious challenges

·      You will collaborate with Data Scientists in order to design scalable implementations of their models

·      You will pair to write clean and iterative code based on TDD

·      Leverage various continuous delivery practices to deploy, support and operate data pipelines

·      Advise and educate clients on how to use different distributed storage and computing technologies from the plethora of options available

·      Develop and operate modern data architecture approaches to meet key business objectives and provide end-to-end data solutions

·      Create data models and speak to the tradeoffs of different modeling approaches

·      On other projects, you might be acting as the architect, leading the design of technical solutions, or perhaps overseeing a program inception to build a new product

·      Seamlessly incorporate data quality into your day-to-day work as well as into the delivery process

·      Assure effective collaboration between Thoughtworks' and the client's teams, encouraging open communication and advocating for shared outcomes

 

Job qualifications

Technical skills

·      You are equally happy coding and leading a team to implement a solution

·      You have a track record of innovation and expertise in Data Engineering

·      You're passionate about craftsmanship and have applied your expertise across a range of industries and organizations

·      You have a deep understanding of data modelling and experience with data engineering tools and platforms such as Kafka, Spark, and Hadoop

·      You have built large-scale data pipelines and data-centric applications using any of the distributed storage platforms such as HDFS, S3, NoSQL databases (HBase, Cassandra, etc.) and any of the distributed processing platforms like Hadoop, Spark, Hive, Oozie, and Airflow in a production setting

·      Hands-on experience with MapR, Cloudera, Hortonworks and/or cloud-based Hadoop distributions (AWS EMR, Azure HDInsight, Qubole, etc.)

·      You are comfortable taking data-driven approaches and applying data security strategy to solve business problems

·      You're genuinely excited about data infrastructure and operations with a familiarity working in cloud environments

·      Working with data excites you: you have created Big data architecture, you can build and operate data pipelines, and maintain data storage, all within distributed systems

 

Professional skills


·      Advocate your data engineering expertise to the broader tech community outside of Thoughtworks, speaking at conferences and acting as a mentor for more junior-level data engineers

·      You're resilient and flexible in ambiguous situations and enjoy solving problems from technical and business perspectives

·      An interest in coaching others, sharing your experience and knowledge with teammates

·      You enjoy influencing others and always advocate for technical excellence while being open to change when needed

Gipfel & Schnell Consultings Pvt Ltd
Posted by TanmayaKumar Pattanaik
Bengaluru (Bangalore)
3 - 9 yrs
₹9L - ₹30L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+10 more

Qualifications & Experience:


▪ 2-4 years of overall experience in ETL, data pipelines, data warehouse development and database design

▪ Software solution development using Hadoop technologies such as MapReduce, Hive, Spark, Kafka, YARN/Mesos, etc.

▪ Expert in SQL; worked on advanced SQL for at least 2 years

▪ Good development skills in Java, Python or other languages

▪ Experience with EMR, S3

▪ Knowledge of and exposure to BI applications, e.g. Tableau, QlikView

▪ Comfortable working in an agile environment

iLink Systems
Posted by Ganesh Sooriyamoorthu
Chennai, Pune, Noida, Bengaluru (Bangalore)
5 - 15 yrs
₹10L - ₹15L / yr
Apache Kafka
Big Data
Java
Spark
Hadoop
+1 more
  • KSQL
  • Data Engineering spectrum (Java/Spark)
  • Spark Scala / Kafka Streaming
  • Confluent Kafka components
  • Basic understanding of Hadoop


Gipfel & Schnell Consultings Pvt Ltd
Posted by Aravind Kumar
Bengaluru (Bangalore)
3 - 8 yrs
Best in industry
Software Testing (QA)
Test Automation (QA)
Appium
Selenium
Java
+11 more

Minimum 4 to 10 years of experience in testing distributed backend software architectures/systems.

• 4+ years of work experience in test planning and automation of enterprise software

• Expertise in programming using Java or Python and other scripting languages.

• Experience with one or more public clouds is expected.

• Comfortable with build processes, CI processes, and managing QA environments, as well as working with build-management tools like Git and Jenkins

• Experience with performance and scalability testing tools.

• Good working knowledge of relational databases, logging, and monitoring frameworks is expected.

• Familiarity with system flows and how components such as Elasticsearch, MongoDB, Kafka, Hive, Redis, and AWS interact with an application

Kloud9 Technologies
Bengaluru (Bangalore)
3 - 6 yrs
₹5L - ₹20L / yr
Amazon Web Services (AWS)
Amazon EMR
EMR
Spark
PySpark
+9 more

About Kloud9:

 

Kloud9 exists with the sole purpose of providing cloud expertise to the retail industry. Our team of cloud architects, engineers and developers help retailers launch a successful cloud initiative so you can quickly realise the benefits of cloud technology. Our standardised, proven cloud adoption methodologies reduce the cloud adoption time and effort so you can directly benefit from lower migration costs.

 

Kloud9 was founded with the vision of bridging the gap between E-commerce and cloud. E-commerce in any industry is constrained by, and spends heavily on, physical data infrastructure.

 

At Kloud9, we know migrating to the cloud is the single most significant technology shift your company faces today. We are your trusted advisors in transformation and are determined to build a deep partnership along the way. Our cloud and retail experts will ease your transition to the cloud.

 

Our sole focus is to provide cloud expertise to the retail industry, giving our clients the empowerment that will take their business to the next level. Our team of proficient architects, engineers and developers have been designing, building and implementing solutions for retailers for an average of more than 20 years.

 

We are a cloud vendor that is both platform and technology independent. Our vendor independence not only provides us with a unique perspective into the cloud market but also ensures that we deliver the cloud solutions that best meet our clients' requirements.


What we are looking for:

● 3+ years’ experience developing Data & Analytic solutions

● Experience building data lake solutions leveraging one or more of the following: AWS EMR, S3, Hive & Spark

● Experience with relational SQL

● Experience with scripting languages such as Shell, Python

● Experience with source control tools such as GitHub and related dev process

● Experience with workflow scheduling tools such as Airflow

● In-depth knowledge of scalable cloud architecture

● Has a passion for data solutions

● Strong understanding of data structures and algorithms

● Strong understanding of solution and technical design

● Has a strong problem-solving and analytical mindset

● Experience working with Agile Teams.

● Able to influence and communicate effectively, both verbally and written, with team members and business stakeholders

● Able to quickly pick up new programming languages, technologies, and frameworks

● Bachelor’s Degree in computer science


Why Explore a Career at Kloud9:

 

With job opportunities in prime locations of the US, London, Poland and Bengaluru, we help build your career path in cutting-edge technologies of AI, Machine Learning and Data Science. Be part of an inclusive and diverse workforce that's changing the face of retail technology with its creativity and innovative solutions. Our vested interest in our employees translates into delivering the best products and solutions to our customers.

LiftOff Software India
Posted by Hameeda Haider
Remote, Bengaluru (Bangalore)
5 - 8 yrs
₹1L - ₹30L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark

Why LiftOff? 

 

We at LiftOff specialize in product creation, for our main forte lies in helping entrepreneurs realize their dreams. We have helped businesses and entrepreneurs launch more than 70 products.

Many on the team are serial entrepreneurs with a history of successful exits.

 

As a Data Engineer, you will work directly with our founders and alongside our engineers on a variety of software projects covering various languages, frameworks, and application architectures.

 

About the Role

 

If you’re driven by the passion to build something great from scratch, a desire to innovate, and a commitment to achieve excellence in your craft, LiftOff is a great place for you.


  • Architect, design, and configure the data ingestion pipeline for data received from third-party vendors
  • Configure data loading with the ease and flexibility to add new data sources as well as refresh previously loaded data
  • Design & implement a consumer graph that provides an efficient means to query the data via email, phone, and address information, using any one of the fields or a combination (a rough sketch follows this list)
  • Expose the consumer graph/search capability for consumption by our middleware APIs, which would be shown in the portal
  • Design/review the current client-specific data storage, which is kept as a copy of the consumer master data for easier retrieval/query for subsequent usage
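
As a rough illustration of the consumer-graph lookup described above, here is a minimal Python sketch; the field names and in-memory structure are assumptions for illustration, not LiftOff's actual design, and a production system would back this with a graph or search store:

    # Lookup index queryable by email, phone, or address, alone or combined.
    # All field names are hypothetical placeholders.
    from collections import defaultdict

    class ConsumerIndex:
        def __init__(self):
            self._by_field = defaultdict(dict)  # field -> value -> set of consumer ids

        def add(self, consumer_id, email=None, phone=None, address=None):
            for field, value in (("email", email), ("phone", phone), ("address", address)):
                if value:
                    self._by_field[field].setdefault(value, set()).add(consumer_id)

        def find(self, **criteria):
            """Return ids matching ALL supplied fields (any combination)."""
            matches = None
            for field, value in criteria.items():
                ids = self._by_field[field].get(value, set())
                matches = ids if matches is None else matches & ids
            return matches or set()

    index = ConsumerIndex()
    index.add("c1", email="a@example.com", phone="999")
    print(index.find(email="a@example.com", phone="999"))  # {'c1'}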


Please note that this is a Consultant role.

Candidates who are okay with freelancing/part-time can apply.

codersbrain
Posted by Aishwarya Hire
Bengaluru (Bangalore)
4 - 6 yrs
₹8L - ₹10L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+1 more
  • Design the architecture of our big data platform
  • Perform and oversee tasks such as writing scripts, calling APIs, web scraping, and writing SQL queries
  • Design and implement data stores that support the scalable processing and storage of our high-frequency data
  • Maintain our data pipeline
  • Customize and oversee integration tools, warehouses, databases, and analytical systems
  • Configure and provide availability for data-access tools used by all data scientists


Cubera Tech India Pvt Ltd
Bengaluru (Bangalore), Chennai
5 - 8 yrs
Best in industry
Data engineering
Big Data
Java
Python
Hibernate (Java)
+10 more

Data Engineer - Senior

Cubera is a data company revolutionizing big data analytics and Adtech through data share value principles wherein the users entrust their data to us. We refine the art of understanding, processing, extracting, and evaluating the data that is entrusted to us. We are a gateway for brands to increase their lead efficiency as the world moves towards web3.

What are you going to do?

Design & develop high-performance and scalable solutions that meet the needs of our customers.

Work closely with Product Management, Architects and cross-functional teams.

Build and deploy large-scale systems in Java/Python.

Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.

Create data tools for analytics and data scientist team members that assist them in building and optimizing their algorithms.

Follow best practices that can be adopted in the Big Data stack.

Use your engineering experience and technical skills to drive the features and mentor the engineers.

What are we looking for ( Competencies) :

Bachelor’s degree in computer science, computer engineering, or related technical discipline.

Overall 5 to 8 years of programming experience in Java and Python, including object-oriented design.

Data handling frameworks: Should have a working knowledge of one or more data handling frameworks like Hive, Spark, Storm, Flink, Beam, Airflow, NiFi, etc.

Data Infrastructure: Should have experience in building, deploying and maintaining applications on popular cloud infrastructure like AWS, GCP, etc.

Data Store: Must have expertise in one of the general-purpose NoSQL data stores like Elasticsearch, MongoDB, Redis, Redshift, etc.

Strong sense of ownership, focus on quality, responsiveness, efficiency, and innovation.

Ability to work with distributed teams in a collaborative and productive manner.

Benefits:

Competitive Salary Packages and benefits.

A collaborative, lively and upbeat work environment with young professionals.

Job Category: Development

Job Type: Full Time

Job Location: Bangalore

 

British Telecom
Agency job
via posterity consulting by Kapil Tiwari
Bengaluru (Bangalore)
3 - 7 yrs
₹8L - ₹14L / yr
Data engineering
Big Data
Google Cloud Platform (GCP)
ETL
Datawarehousing
+6 more
You'll have the following skills & experience:

• Problem Solving: Resolving production issues to fix P1-P4 service issues, problems relating to
introducing new technology, and major issues in the platform and/or service.
• Software Development Concepts: Understands and is experienced with the use of a wide range of
programming concepts and is also aware of and has applied a range of algorithms.
• Commercial & Risk Awareness: Able to understand & evaluate both obvious and subtle commercial
risks, especially in relation to a programme.
Experience you would be expected to have
• Cloud: experience with one of the following cloud vendors: AWS, Azure or GCP
• GCP: experience preferred, but willingness to learn is essential.
• Big Data: experience with Big Data methodology and technologies
• Programming: Python or Java, having worked with data (ETL)
• DevOps: understand how to work in a DevOps and agile way / versioning / automation / defect
management – mandatory
• Agile methodology: knowledge of Jira
Bengaluru (Bangalore)
7 - 15 yrs
₹15L - ₹25L / yr
Cassandra
Technical Architecture
Debugging
Communication Skills

Cassandra Architecture - 7+ Yrs - Bangalore

 

Strong knowledge of Cassandra architecture, including read/write paths, hinted handoffs, read repairs, compaction, cluster/replication strategies, client drivers, caching and GC tuning.

Experience in writing queries and performance tuning.

Experience in handling real-time Cassandra clusters, and debugging and resolving issues.

Experience in implementing keyspaces, tables, indexes, security, data models & access administration (a minimal sketch follows below).

Knowledge of Cassandra backup and recovery.

Good communication skills.
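
For illustration only, here is a hedged sketch of defining a keyspace with an explicit replication strategy and a query-oriented table using the Python cassandra-driver; the contact point, keyspace, and schema are hypothetical, not from this posting:

    # Keyspace with NetworkTopologyStrategy replication and a table whose
    # primary key is designed for the read path. Names are illustrative.
    from cassandra.cluster import Cluster

    cluster = Cluster(["127.0.0.1"])  # hypothetical contact point
    session = cluster.connect()

    session.execute("""
        CREATE KEYSPACE IF NOT EXISTS orders_ks
        WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3}
    """)

    session.execute("""
        CREATE TABLE IF NOT EXISTS orders_ks.orders_by_customer (
            customer_id uuid,
            order_date  date,
            order_id    uuid,
            amount      decimal,
            PRIMARY KEY ((customer_id), order_date, order_id)
        ) WITH CLUSTERING ORDER BY (order_date DESC, order_id ASC)
    """)

Partitioning by customer_id and clustering by order_date keeps a customer's recent orders together on disk, which is the kind of read-path-aware modelling the requirement above refers to.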

Netcore Cloud
Mumbai, Navi Mumbai, Bengaluru (Bangalore), Pune
5 - 9 yrs
₹10L - ₹35L / yr
Java
Spring Boot
Apache Kafka
RabbitMQ
Cassandra
+3 more

Job Title - Senior Java Developer

Job Description - Backend Engineer - Lead (Java)

Mumbai, India | Engineering Team | Full-time

 

Are you passionate enough to be a crucial part of a highly analytical and scalable user engagement platform?

Are you ready to learn new technologies and willing to step out of your comfort zone to explore and learn new skills?

 

If so, this is an opportunity for you to join a high-functioning team and make your mark on our organisation!

 

The Impact you will create:

  • Build campaign generation services that can send app notifications at a speed of 10 million a minute
  • Dashboards to show real-time key performance indicators to clients
  • Develop complex user segmentation engines that create segments on terabytes of data within a few seconds
  • Building highly available & horizontally scalable platform services for ever-growing data
  • Use cloud-based services like AWS Lambda for blazing-fast throughput & auto-scalability (a rough sketch follows this list)
  • Work on complex analytics on terabytes of data, like building cohorts, funnels, user-path analysis, and Recency, Frequency & Monetary analysis at blazing speed
  • You will build backend services and APIs to create scalable engineering systems.
  • As an individual contributor, you will tackle some of our broadest technical challenges that require deep technical knowledge, hands-on software development and seamless collaboration with all functions.
  • You will envision and develop features that are highly reliable and fault-tolerant to deliver a superior customer experience.
  • Collaborate with various cross-functional teams in the company to meet deliverables throughout the software development lifecycle.
  • Identify areas of improvement through data insights and research.
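
As a rough, hedged sketch of the kind of Lambda-backed fan-out mentioned above (the queue URL, event shape, and batching are assumptions for illustration, not Netcore's actual design):

    # AWS Lambda handler that fans a campaign batch out to an SQS delivery
    # queue. Queue URL and event shape are hypothetical placeholders.
    import json
    import boto3

    sqs = boto3.client("sqs")
    QUEUE_URL = "https://sqs.example.amazonaws.com/123456789012/notifications"  # hypothetical

    def lambda_handler(event, context):
        entries = [
            {"Id": str(i), "MessageBody": json.dumps(record)}
            for i, record in enumerate(event.get("records", []))
        ]
        # The SQS batch API accepts at most 10 messages per call.
        for start in range(0, len(entries), 10):
            sqs.send_message_batch(QueueUrl=QUEUE_URL, Entries=entries[start:start + 10])
        return {"queued": len(entries)}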

 

What we look for?

  • 5-9 years of experience in backend development; must have worked with Java and shell/Perl/Python scripting.
  • Solid understanding of engineering best practices, continuous integration, and incremental delivery.
  • Strong analytical skills, debugging and troubleshooting skills, product line analysis.
  • Follower of agile methodology (sprint planning, working on JIRA, retrospectives, etc.).
  • Proficiency in the usage of tools like Docker, Maven, Jenkins and knowledge of Java frameworks like Spring, Spring Boot, Hibernate, JPA.
  • Ability to design application modules using various concepts like object-oriented design, multi-threading, synchronization, caching, fault tolerance, sockets, various IPCs, database interfaces, etc.
  • Hands-on experience with Redis, MySQL, streaming technologies like Kafka producers/consumers, and NoSQL databases like MongoDB/Cassandra.
  • Knowledge of versioning tools like Git and deployment processes like CI/CD.

What’s in it for you?

 

  • Immense growth, continuous learning and deliver the best to the top-notch brands
  • Work with some of the most innovative brains
  • Opportunity to explore your entrepreneurial mind-set
  • Open culture where your creative bug gets activated.

 

If this sounds like a company you would like to be a part of, and a role you would thrive in, please don’t hold back from applying! We need your unique perspective for our continued innovation and success!

So let’s converse! Our inquisitive nature is all keen to know more about you.

Skills

Java, MongoDB, Redis, Cassandra, Kafka, RabbitMQ


 

Play Games24x7
Agency job
via Zyoin Web Private Limited by Vishali Vashnavi
Bengaluru (Bangalore)
8 - 12 yrs
₹40L - ₹50L / yr
Java
J2EE
PostgreSQL
MySQL
MongoDB
+19 more
Requirements:
• B.E./B.Tech. in Computer Science or MCA from a reputed university.
• 3.5+ years of experience in software development, with emphasis on Java/J2EE server-side
programming.
• Hands-on experience in core Java, multithreading, RMI, socket programming, JDBC, NIO, web services
and design patterns.
• Knowledge of distributed systems, distributed caching, messaging frameworks, ESB, etc.
• Experience with the Linux operating system and PostgreSQL/MySQL/MongoDB/Cassandra databases.
• Additionally, knowledge of HBase, Hadoop and Hive is desirable.
• Familiarity with message queue systems such as AMQP and Kafka is desirable.
• Experience as a participant in agile methodologies.
• Excellent written and verbal communication skills and presentation skills.
• This is not a full-stack requirement; we are looking for a purely backend expert.
Series B funded E-commerce startup
Bengaluru (Bangalore)
2 - 5 yrs
₹5L - ₹15L / yr
Java
MySQL
PostgreSQL
NoSQL Databases
MongoDB
+12 more

Responsibilities:

  • Lead simultaneous development for multiple business verticals.
  • Design & develop highly scalable, reliable, secure, and fault-tolerant systems.
  • Ensure that exceptional standards are maintained in all aspects of engineering.
  • Collaborate with other engineering teams to learn and share best practices.
  • Take ownership of technical performance metrics and strive actively to improve them.
  • Mentor junior members of the team and contribute to code reviews.

 

Requirements:

  • A passion to solve tough engineering/data challenges.
  • Be well versed with cloud computing platforms AWS/GCP
  • Experience with SQL technologies (MySQL, PostgreSQL)
  • Experience working with NoSQL technologies (MongoDB, ElasticSearch)
  • Excellent Programming skills in Python/Java/GoLang
  • Big Data streaming services (Kinesis, Kafka, RabbitMQ)
  • Distributed cache systems (Redis, Memcache)
  • Advanced data solutions (BigQuery, Redshift, DynamoDB, Cassandra)
  • Automated testing frameworks and CI/CD pipelines; infrastructure orchestration (Docker/Kubernetes/Nginx)
  • Cloud-native tech like Lambda, ASG, CDN, ELB, SNS/SQS, S3, Route53, SES
Perfios
Agency job
via Seven N Half by Susmitha Goddindla
Bengaluru (Bangalore)
4 - 6 yrs
₹4L - ₹15L / yr
SQL
ETL tool
Python developer
MongoDB
Data Science
+15 more
Job Description
1. ROLE AND RESPONSIBILITIES
1.1. Implement next generation intelligent data platform solutions that help build high performance distributed systems.
1.2. Proactively diagnose problems and envisage long term life of the product focusing on reusable, extensible components.
1.3. Ensure agile delivery processes.
1.4. Work collaboratively with stakeholders including product and engineering teams.
1.5. Build best-practices in the engineering team.
2. PRIMARY SKILL REQUIRED
2.1. 2-6 years of core software product development experience.
2.2. Experience of working with data-intensive projects, with a variety of technology stacks including different programming languages (Java,
Python, Scala)
2.3. Experience in building infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data
sources to support other teams to run pipelines/jobs/reports etc.
2.4. Experience in Open-source stack
2.5. Experience working with RDBMS and NoSQL databases
2.6. Knowledge of enterprise data lakes, data analytics, reporting, in-memory data handling, etc.
2.7. A core computer science academic background
2.8. Aspiration to continue pursuing a career in the technical stream
3. Optional Skill Required:
3.1. Understanding of Big Data technologies and Machine learning/Deep learning
3.2. Understanding of diverse set of databases like MongoDB, Cassandra, Redshift, Postgres, etc.
3.3. Understanding of Cloud Platform: AWS, Azure, GCP, etc.
3.4. Experience in BFSI domain is a plus.
4. PREFERRED SKILLS
4.1. A startup mentality: comfort with ambiguity, a willingness to test, learn and improve rapidly
Quicken Inc
Posted by Shreelakshmi M
Bengaluru (Bangalore)
5 - 8 yrs
Best in industry
ETL
Informatica
Data Warehouse (DWH)
Python
ETL QA
+1 more
  • Graduate+ in Mathematics, Statistics, Computer Science, Economics, Business, Engineering or equivalent work experience.
  • Total experience of 5+ years with at least 2 years in managing data quality for high scale data platforms.
  • Good knowledge of SQL querying.
  • Strong skill in analysing data and uncovering patterns using SQL or Python.
  • Excellent understanding of data warehouse/big data concepts such as data extraction, data transformation and data loading (the ETL process).
  • Strong background in automation and building automated testing frameworks for data ingestion and transformation jobs (a small sketch follows this list).
  • Experience in big data technologies a big plus.
  • Experience in machine learning, especially in data quality applications a big plus.
  • Experience in building data quality automation frameworks a big plus.
  • Strong experience working with an Agile development team with rapid iterations. 
  • Very strong verbal and written communication, and presentation skills.
  • Ability to quickly understand business rules.
  • Ability to work well with others in a geographically distributed team.
  • Keen observation skills to analyse data, highly detail oriented.
  • Excellent judgment, critical-thinking, and decision-making skills; can balance attention to detail with swift execution.
  • Able to identify stakeholders, build relationships, and influence others to get work done.
  • Self-directed and self-motivated individual who takes complete ownership of the product and its outcome.
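
To give a flavor of the kind of data-quality automation described above, here is a minimal, hedged sketch; the column names and sample frames are hypothetical, not Quicken's actual checks:

    # Assertion-style checks a test framework could run after an ETL job:
    # row-count parity with the source and null checks on key columns.
    import pandas as pd

    def check_row_counts(source: pd.DataFrame, target: pd.DataFrame) -> None:
        assert len(source) == len(target), "row count drift between source and target"

    def check_no_nulls(target: pd.DataFrame, key_columns=("customer_id", "txn_date")) -> None:
        for col in key_columns:  # hypothetical key columns
            null_rate = target[col].isna().mean()
            assert null_rate == 0, f"{col} has null rate {null_rate:.2%}"

    source = pd.DataFrame({"customer_id": [1, 2], "txn_date": ["2024-01-01", "2024-01-02"]})
    target = source.copy()
    check_row_counts(source, target)
    check_no_nulls(target)
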
QUT
Agency job
via Hiringhut Solutions Pvt Ltd by Neha Bhattarai
Bengaluru (Bangalore)
3 - 7 yrs
₹1L - ₹10L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+8 more
What You'll Bring

• 3+ years of experience in big data & data warehousing technologies
• Experience in processing and organizing large data sets
• Experience with big data tool sets such as Airflow and Oozie

• Experience working with BigQuery, Snowflake or MPP, Kafka, Azure, GCP and AWS
• Experience developing in programming languages such as SQL, Python, Java or Scala
• Experience in pulling data from a variety of database systems like SQL Server, MariaDB, and Cassandra
NoSQL databases
• Experience working with retail, advertising or media data at large scale
• Experience working with data science engineering, advanced data insights development
• A strong quality proponent who strives to impress with his/her work
• Strong problem-solving skills and ability to navigate complicated database relationships
• Good written and verbal communication skills; demonstrated ability to work with product
management and/or business users to understand their needs.
Simpl
Posted by Elish Ismael
Bengaluru (Bangalore)
3 - 10 yrs
₹10L - ₹50L / yr
Java
Apache Spark
Big Data
Hadoop
Apache Hive
About Simpl
The thrill of working at a start-up that is starting to scale massively is something else. Simpl (FinTech startup of the year - 2020) was formed in 2015 by Nitya Sharma, an investment banker from Wall Street, and Chaitra Chidanand, a tech executive from the Valley, when they teamed up with a very clear mission - to make money simple so that people can live well and do amazing things. Simpl is the payment platform for the mobile-first world, and we're backed by some of the best names in fintech globally (folks who have invested in Visa, Square and Transferwise). Joe Saunders, ex-Chairman and CEO of Visa, is a board member.

Everyone at Simpl is an internal entrepreneur who is given a lot of bandwidth and resources to create the next breakthrough towards the long term vision of “making money Simpl”. Our first product is a payment platform that lets people buy instantly, anywhere online, and pay later. In
the background, Simpl uses big data for credit underwriting, risk and fraud modelling, all without any paperwork, and enables Banks and Non-Bank Financial Companies to access a whole new consumer market.
In place of traditional forms of identification and authentication, Simpl integrates deeply into merchant apps via SDKs and APIs. This allows for more sophisticated forms of authentication that take full advantage of smartphone data and processing power

Skillset:
• Workflow manager/scheduler like Airflow, Luigi, Oozie (a minimal DAG sketch follows this list)
• Good handle on Python
• ETL experience
• Batch processing frameworks like Spark, MR/PIG
• File formats: Parquet, JSON, XML, Thrift, Avro, Protobuf
• Rule engine (Drools - business rule management system)
• Distributed file systems like HDFS, NFS, AWS S3 and equivalent
• Built/configured dashboards
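
For illustration, a minimal Airflow DAG wiring an extract task ahead of a load task; the task bodies, names, and schedule are hypothetical placeholders, not Simpl's actual pipeline:

    # Two dependent Python tasks on a daily schedule (Airflow 2.x style).
    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pull raw files")       # placeholder task body

    def load():
        print("write to warehouse")   # placeholder task body

    with DAG(
        dag_id="daily_etl",
        start_date=datetime(2023, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        t_extract = PythonOperator(task_id="extract", python_callable=extract)
        t_load = PythonOperator(task_id="load", python_callable=load)
        t_extract >> t_load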

Nice to have:
• Data platform experience, e.g. building data lakes, working with near-realtime
applications/frameworks like Storm, Flink, Spark.
• AWS
• File encoding types: Thrift, Avro, Protobuf, Parquet, JSON, XML
• Hive, HBase
xpressbees
Pune, Bengaluru (Bangalore)
6 - 8 yrs
₹15L - ₹25L / yr
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
Artificial Intelligence (AI)
+6 more
Company Profile
XpressBees – a logistics company started in 2015 – is amongst the fastest-growing companies of its sector. Our
vision to evolve into a strong full-service logistics organization reflects itself in the various lines of business like B2C
logistics 3PL, B2B Xpress, Hyperlocal and Cross border Logistics.
Our strong domain expertise and constant focus on innovation has helped us rapidly evolve as the most trusted
logistics partner of India. XB has progressively carved our way towards best-in-class technology platforms, an
extensive logistics network reach, and a seamless last mile management system.
While on this aggressive growth path, we seek to become the one-stop-shop for end-to-end logistics solutions. Our
big focus areas for the very near future include strengthening our presence as service providers of choice and
leveraging the power of technology to drive supply chain efficiencies.
Job Overview
XpressBees would enrich and scale its end-to-end logistics solutions at a high pace. This is a great opportunity to join
the team working on forming and delivering the operational strategy behind Artificial Intelligence / Machine Learning
and Data Engineering, leading projects and teams of AI Engineers collaborating with Data Scientists. In your role, you
will build high performance AI/ML solutions using groundbreaking AI/ML and BigData technologies. You will need to
understand business requirements and convert them to a solvable data science problem statement. You will be
involved in end to end AI/ML projects, starting from smaller scale POCs all the way to full scale ML pipelines in
production.
Seasoned AI/ML engineers would own the implementation and productionization of cutting-edge AI-driven algorithmic
components for search, recommendation and insights to improve the efficiencies of the logistics supply chain and
serve the customer better.
You will apply innovative ML tools and concepts to deliver value to our teams and customers and make an impact to
the organization while solving challenging problems in the areas of AI, ML , Data Analytics and Computer Science.
Opportunities for application:
- Route Optimization
- Address / Geo-Coding Engine
- Anomaly detection, Computer Vision (e.g. loading / unloading)
- Fraud Detection (fake delivery attempts)
- Promise Recommendation Engine etc.
- Customer & Tech support solutions, e.g. chat bots.
- Breach detection / prediction
An Artificial Intelligence Engineer would apply himself/herself in the areas of -
- Deep Learning, NLP, Reinforcement Learning
- Machine Learning - Logistic Regression, Decision Trees, Random Forests, XGBoost, etc..
- Driving Optimization via LPs, MILPs, Stochastic Programs, and MDPs
- Operations Research, Supply Chain Optimization, and Data Analytics/Visualization
- Computer Vision and OCR technologies
The AI Engineering team enables internal teams to add AI capabilities to their Apps and Workflows easily via APIs
without needing to build AI expertise in each team – Decision Support, NLP, Computer Vision, for Public Clouds and
Enterprise in NLU, Vision and Conversational AI. The candidate is adept at working with large data sets to find
opportunities for product and process optimization and using models to test the effectiveness of different courses of
action. They must have knowledge using a variety of data mining/data analysis methods, using a variety of data tools,
building, and implementing models, using/creating algorithms, and creating/running simulations. They must be
comfortable working with a wide range of stakeholders and functional teams. The right candidate will have a passion
for discovering solutions hidden in large data sets and working with stakeholders to improve business outcomes.

Roles & Responsibilities
● Develop scalable infrastructure, including microservices and backend, that automates training and
deployment of ML models.
● Building cloud services in Decision Support (Anomaly Detection, Time series forecasting, Fraud detection,
Risk prevention, Predictive analytics), computer vision, natural language processing (NLP) and speech that
work out of the box.
● Brainstorm and Design various POCs using ML/DL/NLP solutions for new or existing enterprise problems.
● Work with fellow data scientists/SW engineers to build out other parts of the infrastructure, effectively
communicating your needs and understanding theirs, and address external and internal stakeholders'
product challenges.
● Build core of Artificial Intelligence and AI Services such as Decision Support, Vision, Speech, Text, NLP, NLU,
and others.
● Leverage Cloud technology –AWS, GCP, Azure
● Experiment with ML models in Python using machine learning libraries (Pytorch, Tensorflow), Big Data,
Hadoop, HBase, Spark, etc
● Work with stakeholders throughout the organization to identify opportunities for leveraging company data to
drive business solutions.
● Mine and analyze data from company databases to drive optimization and improvement of product
development, marketing techniques and business strategies.
● Assess the effectiveness and accuracy of new data sources and data gathering techniques.
● Develop custom data models and algorithms to apply to data sets.
● Use predictive modeling to increase and optimize customer experiences, supply chain metric and other
business outcomes.
● Develop company A/B testing framework and test model quality.
● Coordinate with different functional teams to implement models and monitor outcomes.
● Develop processes and tools to monitor and analyze model performance and data accuracy.
● Deliver machine learning and data science projects with data science techniques and associated libraries
such as AI/ ML or equivalent NLP (Natural Language Processing) packages. Such techniques include a good
to phenomenal understanding of statistical models, probabilistic algorithms, classification, clustering, deep
learning or related approaches as it applies to financial applications.
● The role will encourage you to learn a wide array of capabilities, toolsets and architectural patterns for
successful delivery.
What is required of you?
You will get an opportunity to build and operate a suite of massive scale, integrated data/ML platforms in a broadly
distributed, multi-tenant cloud environment.
● B.S., M.S., or Ph.D. in Computer Science, Computer Engineering
● Coding knowledge and experience with several languages: C, C++, Java,JavaScript, etc.
● Experience with building high-performance, resilient, scalable, and well-engineered systems
● Experience in CI/CD and development best practices, instrumentation, logging systems
● Experience using statistical computer languages (R, Python, SQL, etc.) to manipulate data and draw insights
from large data sets.
● Experience working with and creating data architectures.
● Good understanding of various machine learning and natural language processing technologies, such as
classification, information retrieval, clustering, knowledge graph, semi-supervised learning and ranking.

● Knowledge and experience in statistical and data mining techniques: GLM/Regression, Random Forest,
Boosting, Trees, text mining, social network analysis, etc.
● Knowledge on using web services: Redshift, S3, Spark, Digital Ocean, etc.
● Knowledge on creating and using advanced machine learning algorithms and statistics: regression,
simulation, scenario analysis, modeling, clustering, decision trees, neural networks, etc.
● Knowledge on analyzing data from 3rd party providers: Google Analytics, Site Catalyst, Core metrics,
AdWords, Crimson Hexagon, Facebook Insights, etc.
● Knowledge on distributed data/computing tools: Map/Reduce, Hadoop, Hive, Spark, MySQL, Kafka etc.
● Knowledge on visualizing/presenting data for stakeholders using: Quicksight, Periscope, Business Objects,
D3, ggplot, Tableau etc.
● Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural
networks, etc.) and their real-world advantages/drawbacks.
● Knowledge of advanced statistical techniques and concepts (regression, properties of distributions,
statistical tests, and proper usage, etc.) and experience with applications.
● Experience building data pipelines that prep data for Machine learning and complete feedback loops.
● Knowledge of Machine Learning lifecycle and experience working with data scientists
● Experience with Relational databases and NoSQL databases
● Experience with workflow scheduling / orchestration such as Airflow or Oozie
● Working knowledge of current techniques and approaches in machine learning and statistical or
mathematical models
● Strong Data Engineering & ETL skills to build scalable data pipelines. Exposure to data streaming stack (e.g.
Kafka)
● Relevant experience in fine-tuning and optimizing ML (especially Deep Learning) models to bring down
serving latency.
● Exposure to the ML model productionization stack, e.g. MLflow, Docker (a minimal tracking sketch follows this list)
● Excellent exploratory data analysis skills to slice & dice data at scale using SQL in Redshift/BigQuery.
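
As a small, hedged illustration of the productionization stack mentioned above, here is a minimal MLflow tracking sketch; the experiment name, model, and metric are hypothetical, not from this posting:

    # Log parameters, a metric, and a model artifact for one training run.
    import mlflow
    import mlflow.sklearn
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=200, n_features=5, random_state=0)

    mlflow.set_experiment("fraud-detection-poc")  # hypothetical experiment name
    with mlflow.start_run():
        model = LogisticRegression(max_iter=200).fit(X, y)
        mlflow.log_param("max_iter", 200)
        mlflow.log_metric("train_accuracy", model.score(X, y))
        mlflow.sklearn.log_model(model, "model")
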
xpressbees
Posted by Alfiya Khan
Pune, Bengaluru (Bangalore)
6 - 8 yrs
₹15L - ₹25L / yr
Big Data
Data Warehouse (DWH)
Data modeling
Apache Spark
Data integration
+10 more
Company Profile
XpressBees – a logistics company started in 2015 – is amongst the fastest growing
companies of its sector. While we started off rather humbly in the space of
ecommerce B2C logistics, the last 5 years have seen us steadily progress towards
expanding our presence. Our vision to evolve into a strong full-service logistics
organization reflects itself in our new lines of business like 3PL, B2B Xpress and cross
border operations. Our strong domain expertise and constant focus on meaningful
innovation have helped us rapidly evolve as the most trusted logistics partner of
India. We have progressively carved our way towards best-in-class technology
platforms, an extensive network reach, and a seamless last mile management
system. While on this aggressive growth path, we seek to become the one-stop-shop
for end-to-end logistics solutions. Our big focus areas for the very near future
include strengthening our presence as service providers of choice and leveraging the
power of technology to improve efficiencies for our clients.

Job Profile
As a Lead Data Engineer in the Data Platform Team at XpressBees, you will build the data platform
and infrastructure to support high quality and agile decision-making in our supply chain and logistics
workflows.
You will define the way we collect and operationalize data (structured / unstructured), and
build production pipelines for our machine learning models, and (RT, NRT, Batch) reporting &
dashboarding requirements. As a Senior Data Engineer in the XB Data Platform Team, you will use
your experience with modern cloud and data frameworks to build products (with storage and serving
systems)
that drive optimisation and resilience in the supply chain via data visibility, intelligent decision making,
insights, anomaly detection and prediction.

What You Will Do
• Design and develop data platform and data pipelines for reporting, dashboarding and
machine learning models. These pipelines would productionize machine learning models
and integrate with agent review tools.
• Meet the data completeness, correctness and freshness requirements.
• Evaluate and identify the data store and data streaming technology choices.
• Lead the design of the logical model and implement the physical model to support
business needs. Come up with logical and physical database designs across platforms (MPP,
MR, Hive/PIG) that are optimal for different use cases (structured/semi-structured).
Envision & implement the optimal data modelling, physical design and
performance optimization technique/approach required for the problem.
• Support your colleagues by reviewing code and designs.
• Diagnose and solve issues in our existing data pipelines and envision and build their
successors.

Qualifications & Experience relevant for the role

• A bachelor's degree in Computer Science or related field with 6 to 9 years of technology
experience.
• Knowledge of Relational and NoSQL data stores, stream processing and micro-batching to
make technology & design choices.
• Strong experience in System Integration, Application Development, ETL, Data-Platform
projects. Talented across technologies used in the enterprise space.
• Software development experience, including:
• Expertise in relational and dimensional modelling
• Exposure across all the SDLC process
• Experience in cloud architecture (AWS)
• Proven track record in keeping existing technical skills and developing new ones, so that
you can make strong contributions to deep architecture discussions around systems and
applications in the cloud ( AWS).

• Characteristics of a forward-thinker and self-starter who flourishes with new challenges
and adapts quickly to learning new knowledge
• Ability to work with cross-functional teams of consulting professionals across multiple
projects.
• Knack for helping an organization to understand application architectures and integration
approaches, to architect advanced cloud-based solutions, and to help launch the build-out
of those systems
• Passion for educating, training, designing, and building end-to-end systems.
Bengaluru (Bangalore)
4 - 8 yrs
₹17L - ₹40L / yr
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Amazon Web Services (AWS)
Python
+4 more
Role: Machine Learning Engineer

As a machine learning engineer on the team, you will
• Help science and product teams innovate in developing and improving end-to-end
solutions to machine learning-based security/privacy control
• Partner with scientists to brainstorm and create new ways to collect/curate data
• Design and build infrastructure critical to solving problems in privacy-preserving machine
learning
• Help the team self-organize and follow machine learning best practices.

Basic Qualifications

• 4+ years of experience contributing to the architecture and design (architecture, design
patterns, reliability and scaling) of new and current systems
• 4+ years of programming experience with at least one modern language such as Java,
C++, or C# including object-oriented design
• 4+ years of professional software development experience
• 4+ years of experience as a mentor, tech lead OR leading an engineering team
• 4+ years of professional software development experience in Big Data and Machine
Learning Fields
• Knowledge of common ML frameworks such as Tensorflow, PyTorch
• Experience with cloud provider Machine Learning tools such as AWS SageMaker
• Programming experience with at least two modern languages such as Python, Java, C++,
or C# including object-oriented design
• 3+ years of experience contributing to the architecture and design (architecture, design
patterns, reliability and scaling) of new and current systems
• Experience in Python
• BS in Computer Science or equivalent
TartanHQ Solutions Private Limited
Posted by Prabhat Shobha
Bengaluru (Bangalore)
2 - 4 yrs
₹9L - ₹15L / yr
Machine Learning (ML)
Data Science
Natural Language Processing (NLP)
Computer Vision
Python
+4 more

Key deliverables for the Data Science Engineer would be to help us discover the information hidden in vast amounts of data, and help us make smarter decisions to deliver even better products. Your primary focus will be on applying data mining techniques, doing statistical analysis, and building high-quality prediction systems integrated with our products.

What will you do?

  • You will be building and deploying ML models to solve specific business problems related to NLP, computer vision, and fraud detection.
  • You will be constantly assessing and improving the model using techniques like Transfer learning
  • You will identify valuable data sources and automate collection processes along with undertaking pre-processing of structured and unstructured data
  • You will own the complete ML pipeline - data gathering/labeling, cleaning, storage, modeling, training/testing, and deployment.
  • Assessing the effectiveness and accuracy of new data sources and data gathering techniques.
  • Building predictive models and machine-learning algorithms to apply to data sets.
  • Coordinate with different functional teams to implement models and monitor outcomes.
  • Presenting information using data visualization techniques and proposing solutions and strategies to business challenges


We would love to hear from you if :

  • You have 2+ years of experience as a software engineer at a SaaS or technology company
  • Demonstrable hands-on programming experience with Python/R Data Science Stack
  • Ability to design and implement workflows of Linear and Logistic Regression, Ensemble Models (Random Forest, Boosting) using R/Python (a minimal sketch follows this list)
  • Familiarity with Big Data platforms (Databricks, Hadoop, Hive) and AWS services (SageMaker, IAM, S3, Lambda Functions, Redshift, Elasticsearch)
  • Experience in Probability and Statistics; ability to use ideas of data distributions, hypothesis testing and other statistical tests.
  • Demonstrable competency in data visualisation using the Python/R Data Science Stack.
  • Preferable: experience in web crawling and data scraping
  • Strong experience in NLP; worked on libraries such as NLTK, Spacy, Pattern, Gensim, etc.
  • Experience with text mining, pattern matching and fuzzy matching
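
For a rough flavor of the workflow referred to above, a minimal scikit-learn sketch on synthetic data; the dataset, features, and models are placeholders, not Tartan's actual use case:

    # Compare logistic regression with a random-forest ensemble by
    # held-out accuracy on a synthetic classification problem.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=500, n_features=10, random_state=42)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

    for model in (LogisticRegression(max_iter=500), RandomForestClassifier(n_estimators=100)):
        model.fit(X_tr, y_tr)
        print(type(model).__name__, model.score(X_te, y_te))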

Why Tartan?
  • Brand new Macbook
  • Stock Options
  • Health Insurance
  • Unlimited Sick Leaves
  • Passion Fund (Invest in yourself or your passion project)
  • Wind Down
RedSeer Consulting
Posted by Raunak Swarnkar
Bengaluru (Bangalore)
0 - 2 yrs
₹10L - ₹15L / yr
Python
PySpark
SQL
pandas
Cloud Computing
+2 more

BRIEF DESCRIPTION:

At least 1 year of Python, Spark, SQL and data engineering experience

Primary Skillset: PySpark, Scala/Python/Spark, Azure Synapse, S3, RedShift/Snowflake

Relevant Experience: Legacy ETL job Migration to AWS Glue / Python & Spark combination

 

ROLE SCOPE:

Reverse engineer the existing/legacy ETL jobs

Create the workflow diagrams and review the logic diagrams with Tech Leads

Write equivalent logic in Python & Spark

Unit test the Glue jobs and certify the data loads before passing to system testing

Follow the best practices, enable appropriate audit & control mechanism

Be analytically skillful; identify root causes quickly and debug issues efficiently

Take ownership of the deliverables and support the deployments

 

REQUIREMENTS:

Create data pipelines for data integration into Cloud stacks eg. Azure Synapse

Code data processing jobs in Azure Synapse Analytics, Python, and Spark

Experience in dealing with structured, semi-structured, and unstructured data in batch and real-time environments.

Should be able to process .json, .parquet and .avro files (a short sketch follows)
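
To make the file-format requirement concrete, a short, hedged PySpark sketch reading the three formats; the paths are placeholders, and Avro support assumes the external spark-avro package is available:

    # Reading JSON, Parquet, and Avro with PySpark. Paths are hypothetical.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder.appName("format-readers")
        # Avro is not bundled with Spark; pull in the spark-avro package.
        .config("spark.jars.packages", "org.apache.spark:spark-avro_2.12:3.3.0")
        .getOrCreate()
    )

    df_json = spark.read.json("s3://bucket/raw/events.json")
    df_parquet = spark.read.parquet("s3://bucket/curated/events.parquet")
    df_avro = spark.read.format("avro").load("s3://bucket/raw/events.avro")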

 

PREFERRED BACKGROUND:

Tier 1/2 candidates from IITs/NITs/IIITs

However, relevant experience and a learning attitude take precedence

Quess Corp Limited
Posted by Anjali Singh
Noida, Delhi, Gurugram, Ghaziabad, Faridabad, Bengaluru (Bangalore), Chennai
5 - 8 yrs
₹1L - ₹15L / yr
Google Cloud Platform (GCP)
Python
Big Data
Data processing
Data Visualization

A GCP Data Analyst profile must have the below skill sets:

 

APT Portfolio
Posted by Ankita Pachauri
Delhi, Gurugram, Bengaluru (Bangalore)
10 - 15 yrs
₹50L - ₹70L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+13 more

A.P.T Portfolio is a high-frequency trading firm that specialises in Quantitative Trading & Investment Strategies. Founded in November 2009, it has been a major liquidity provider in global stock markets.


As a manager, you would be in charge of managing the DevOps team, and your remit shall include the following

  • Private Cloud - Design & maintain a high performance and reliable network architecture to support  HPC applications
  • Scheduling Tool - Implement and maintain an HPC scheduling technology like Kubernetes, Hadoop YARN, Mesos, HTCondor or Nomad for processing & scheduling analytical jobs. Implement controls which allow analytical jobs to seamlessly utilize idle capacity on the private cloud.
  • Security - Implementing best security practices and implementing data isolation policy between different divisions internally. 
  • Capacity Sizing - Monitor private cloud usage and share details with different teams. Plan capacity enhancements on a quarterly basis. 
  • Storage solution - Optimize storage solutions like NetApp, EMC, Quobyte for analytical jobs. Monitor their performance on a daily basis to identify issues early.
  • NFS - Implement and optimize latest version of NFS for our use case. 
  • Public Cloud - Drive AWS/Google-Cloud utilization in the firm for increasing efficiency, improving collaboration and for reducing cost. Maintain the environment for our existing use cases. Further explore potential areas of using public cloud within the firm. 
  • BackUps  - Identify and automate  back up of all crucial data/binary/code etc in a secured manner at such duration warranted by the use case. Ensure that recovery from back-up is tested and seamless. 
  •  Access Control - Maintain passwordless access control and improve security over time. Minimize failures of automated jobs due to unsuccessful logins.
  •  Operating System  -Plan, test and roll out new operating system for all production, simulation and desktop environments. Work closely with developers to highlight new performance enhancements capabilities of new versions. 
  •  Configuration management  -Work closely with DevOps/ development team to freeze configurations/playbook for various teams & internal applications. Deploy and maintain standard tools such as Ansible, Puppet, chef etc for the same. 
  •  Data Storage & Security Planning  - Maintain a tight control of root access on various devices. Ensure root access is rolled back as soon the desired objective is achieved.
  • Audit access logs on devices. Use third party tools to put in a monitoring mechanism for early detection of any suspicious activity. 
  • Maintaining all third party tools used for development and collaboration - This shall include maintaining a fault tolerant   environment for GIT/Perforce, productivity tools such as Slack/Microsoft team, build tools like Jenkins/Bamboo etc


Qualifications 

  • Bachelor's or Master's level degree, preferably in CSE/IT
  • 10+ years of relevant experience in a sys-admin function
  • Must have strong knowledge of IT infrastructure, Linux, networking and grid.
  • Must have a strong grasp of automation & data management tools.
  • Proficient in scripting languages and Python


Desirables

  • Professional attitude; a cooperative and mature approach to work; must be focused, structured, and well considered, with strong troubleshooting skills.
  • Exhibit a high level of individual initiative and ownership, and effectively collaborate with other team members.

 

APT Portfolio is an equal opportunity employer

Read more
Impetus

at Impetus

3 recruiters
Agency job
via Impetus by Gangadhar TM
Bengaluru (Bangalore), Pune, Hyderabad, Indore, Noida, Gurugram
10 - 16 yrs
₹30L - ₹50L / yr
Big Data
Data Warehouse (DWH)
Product Management

Job Title: Product Manager

 

Job Description

Bachelor's or master's degree in computer science or equivalent experience.
Worked as a Product Owner before and took responsibility for a product or project delivery.
Well-versed with data warehouse modernization to Big Data and Cloud environments.
Good knowledge* of any of the clouds (AWS/Azure/GCP) – Must Have
Practical experience with continuous integration and continuous delivery workflows.
Self-motivated with strong organizational/prioritization skills and ability to multi-task with close attention to detail.
Good communication skills
Experience in working within a distributed agile team
Experience in handling migration projects – Good to Have
 

*Data Ingestion, Processing, and Orchestration knowledge

 

Roles & Responsibilities


Responsible for coming up with innovative and novel ideas for the product.
Define product releases, features, and roadmap.
Collaborate with product teams on defining product objectives, including creating a product roadmap, delivery, market research, customer feedback, and stakeholder inputs.
Work with the Engineering teams to communicate release goals and be a part of the product lifecycle. Work closely with the UX and UI team to create the best user experience for the end customer.
Work with the Marketing team to define GTM activities.
Interface with Sales & Customer teams to identify customer needs and product gaps
Market and competition analysis activities.
Participate in the Agile ceremonies with the team, define epics, user stories, acceptance criteria
Ensure product usability from the end-user perspective

 

Mandatory Skills

Product Management, DWH, Big Data

Read more
Top 3 Fintech Startup
Agency job
via Jobdost by Sathish Kumar
Bengaluru (Bangalore)
6 - 9 yrs
₹16L - ₹24L / yr
SQL
skill iconAmazon Web Services (AWS)
Spark
PySpark
Apache Hive

We are looking for an exceptionally talented Lead Data Engineer who has exposure to implementing AWS services to build data pipelines, API integration, and designing data warehouses. A candidate with both hands-on and leadership capabilities will be ideal for this position.

 

Qualification: At least a bachelor's degree in Science, Engineering, or Applied Mathematics; a master's degree is preferred.

 

Job Responsibilities:

• Total 6+ years of experience as a Data Engineer and 2+ years of experience in managing a team

• Have minimum 3 years of AWS Cloud experience.

• Well-versed in languages such as Python, PySpark, SQL, Node.js, etc.

• Has extensive experience in the real-time Spark ecosystem and has worked on both real-time and batch processing (see the streaming sketch after this list)

• Have experience in AWS Glue, EMR, DMS, Lambda, S3, DynamoDB, Step functions, Airflow, RDS, Aurora etc.

• Experience with modern database systems such as Redshift, Presto, Hive, etc.

• Worked on building data lakes in the past on S3 or Apache Hudi

• Solid understanding of Data Warehousing Concepts

• Good to have experience on tools such as Kafka or Kinesis

• Good to have AWS Developer Associate or Solutions Architect Associate Certification

• Have experience in managing a team
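As a hedged illustration of the real-time side referenced above, a minimal PySpark Structured Streaming sketch that reads JSON events from Kafka and maintains running aggregates; the topic, schema, and console sink are illustrative assumptions, not this role's actual pipeline:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F
    from pyspark.sql.types import DoubleType, StringType, StructType

    spark = SparkSession.builder.appName("stream-sketch").getOrCreate()

    # Hypothetical event schema for a 'payments' topic
    schema = StructType().add("user_id", StringType()).add("amount", DoubleType())

    events = (spark.readStream
              .format("kafka")
              .option("kafka.bootstrap.servers", "localhost:9092")
              .option("subscribe", "payments")
              .load()
              .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
              .select("e.*"))

    # Running per-user totals, written to the console for illustration
    query = (events.groupBy("user_id")
             .agg(F.sum("amount").alias("total"))
             .writeStream
             .outputMode("complete")
             .format("console")
             .option("checkpointLocation", "/tmp/stream-ckpt")
             .start())
    query.awaitTermination()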

Read more
hiring for a leading client
Agency job
via Jobaajcom by Saksham Agarwal
Bengaluru (Bangalore)
1 - 3 yrs
₹12L - ₹15L / yr
Big Data
Apache Hadoop
Apache Impala
Apache Kafka
Apache Spark
+5 more
We are seeking a self-motivated Software Engineer with hands-on experience to build sustainable data solutions, identify and address performance bottlenecks, collaborate with other team members, and implement best practices for data engineering. Our engineering process is fully agile and has a really fast release cycle, which keeps our environment very energetic and fun.

What you'll do:

Design and development of scalable applications.
Collaborate with tech leads to get maximum understanding of underlying infrastructure.
Contribute to continual improvement by suggesting improvements to the software system.
Ensure high scalability and performance
You will advocate for good, clean, well-documented, and performant code; follow standards and best practices.
We'd love for you to have:

Education: Bachelor's/Master's degree in Computer Science
Experience: 1-3 years of relevant experience in BI/Big-Data with hands-on coding experience
Mandatory Skills

Strong in problem-solving
Good exposure to Big Data technologies: Hive, Hadoop, Impala, HBase, Kafka, Spark
Strong experience in Data Engineering
Able to comprehend challenges related to database and data warehousing technologies, and to understand complex designs and system architecture
Experience with the software development lifecycle: design, develop, review, debug, document, and deliver (especially in a multi-location organization)
Working knowledge of Java and Python
Desired Skills

Experience with reporting tools like Tableau, QlikView
Awareness of CI/CD pipelines
Inclination to work on cloud platforms, e.g., AWS
Crisp communication skills with team members and business owners
Be able to work in a challenging, dynamic environment and meet tight deadlines
Read more
Bengaluru (Bangalore), Pune, Hyderabad
4 - 6 yrs
₹6L - ₹22L / yr
Apache HBase
Apache Hive
Apache Spark
skill iconGo Programming (Golang)
skill iconRuby on Rails (ROR)
+5 more
Urgently required: Hadoop Developer for a reputed MNC.

Location: Bangalore/Pune/Hyderabad/Nagpur

4-5 years of overall experience in software development.
- Experience on Hadoop (Apache/Cloudera/Hortonworks) and/or other MapReduce platforms
- Experience on Hive, Pig, Sqoop, Flume and/or Mahout
- Experience on NoSQL - HBase, Cassandra, MongoDB
- Hands-on experience with Spark development; knowledge of Storm, Kafka, Scala
- Good knowledge of Java
- Good background in configuration management/ticketing systems like Maven/Ant/JIRA, etc.
- Knowledge around any data integration and/or EDW tools is a plus
- Good to have knowledge of Python/Perl/Shell

 

Please note - HBase, Hive, and Spark are a must.

Read more
Fragma Data Systems

at Fragma Data Systems

8 recruiters

Vamsikrishna G
Posted by Vamsikrishna G
Bengaluru (Bangalore)
2 - 10 yrs
₹5L - ₹15L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+1 more
Job Description:

Must-Have Skills:
• Good experience in PySpark, including DataFrame core functions and Spark SQL (see the sketch below)
• Good experience in SQL DBs - able to write queries of fair complexity
• Should have excellent experience in Big Data programming for data transformation and aggregations
• Good at ELT architecture: business-rules processing and data extraction from the data lake into data streams for business consumption
• Good customer communication skills
• Good analytical skills
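A minimal sketch of the kind of work described above - DataFrame core functions plus the same aggregation in Spark SQL. The table and column names (transactions, amount, country, created_at) and S3 paths are illustrative assumptions:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("etl-sketch").getOrCreate()
    df = spark.read.parquet("s3://example-bucket/transactions/")

    # DataFrame core functions: filter, group, aggregate
    daily = (df.filter(F.col("amount") > 0)
             .groupBy("country", F.to_date("created_at").alias("day"))
             .agg(F.sum("amount").alias("total_amount"),
                  F.count("*").alias("txn_count")))

    # The same aggregation expressed in Spark SQL
    df.createOrReplaceTempView("transactions")
    daily_sql = spark.sql("""
        SELECT country, to_date(created_at) AS day,
               SUM(amount) AS total_amount, COUNT(*) AS txn_count
        FROM transactions
        WHERE amount > 0
        GROUP BY country, to_date(created_at)
    """)

    daily.write.mode("overwrite").parquet("s3://example-bucket/daily_totals/")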
Read more
Impetus Technologies

at Impetus Technologies

1 recruiter
Gangadhar T.M
Posted by Gangadhar T.M
Bengaluru (Bangalore), Hyderabad, Pune, Indore, Gurugram, Noida
10 - 17 yrs
₹25L - ₹50L / yr
Product Management
Big Data
Data Warehouse (DWH)
ETL
Hi All, 
Greetings! We are looking for a Product Manager for our data modernization product. We need a resource with good knowledge of Big Data/DWH, strong stakeholder management, and presentation skills.
Read more
US Based Product Organization
Bengaluru (Bangalore)
10 - 15 yrs
₹25L - ₹45L / yr
Hadoop
HDFS
Apache Hive
Zookeeper
Cloudera
+8 more

Responsibilities :

  • Provide Support Services to our Gold & Enterprise customers using our flagship product suites. This may include assistance provided during the engineering and operations of distributed systems as well as responses for mission-critical systems and production customers.
  • Lead end-to-end delivery and customer success of next-generation features related to scalability, reliability, robustness, usability, security, and performance of the product
  • Lead and mentor others about concurrency, parallelization to deliver scalability, performance, and resource optimization in a multithreaded and distributed environment
  • Demonstrate the ability to actively listen to customers and show empathy to the customer’s business impact when they experience issues with our products


Requires Skills :

  • 10+ years of experience with a highly scalable, distributed, multi-node environment (100+ nodes)
  • Hadoop operations, including Zookeeper, HDFS, YARN, Hive, and related components like the Hive metastore, Cloudera Manager/Ambari, etc.
  • Authentication and security configuration and tuning (KNOX, LDAP, Kerberos, SSL/TLS, second priority: SSO/OAuth/OIDC, Ranger/Sentry)
  • Java troubleshooting, e.g., collection and evaluation of jstacks and heap dumps (see the scripting sketch after this list)
  • Linux, NFS, Windows, including application installation, scripting, basic command line
  • Docker and Kubernetes configuration and troubleshooting, including Helm charts, storage options, logging, and basic kubectl CLI
  • Experience working with scripting languages (Bash, PowerShell, Python)
  • Working knowledge of application, server, and network security management concepts
  • Familiarity with virtual machine technologies
  • Knowledge of databases like MySQL and PostgreSQL
  • Certification on any of the leading Cloud providers (AWS, Azure, GCP ) and/or Kubernetes is a big plus
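A small, hedged example of the scripting + Java troubleshooting combination above - collecting a series of jstacks from a JVM for later evaluation. It assumes a JDK's jstack is on PATH and that pid identifies the target process:

    import pathlib
    import subprocess
    import time

    def collect_jstacks(pid: int, count: int = 3, interval_s: int = 10,
                        out_dir: str = "/tmp/jstacks") -> None:
        """Dump thread stacks `count` times, `interval_s` seconds apart."""
        pathlib.Path(out_dir).mkdir(parents=True, exist_ok=True)
        for i in range(count):
            dump = subprocess.run(["jstack", "-l", str(pid)],
                                  capture_output=True, text=True, check=True)
            out = pathlib.Path(out_dir) / f"jstack_{pid}_{i}.txt"
            out.write_text(dump.stdout)
            time.sleep(interval_s)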
Read more
US Based Product Organization
Agency job
via e-Hireo by Biswajit Banik
Bengaluru (Bangalore)
4 - 8 yrs
₹15L - ₹35L / yr
skill iconKubernetes
Terraform
skill iconDocker
DevOps
skill iconAmazon Web Services (AWS)
+7 more

Roles & Responsibilities :

  • Champion engineering and operational excellence.
  • Establish a solid infrastructure framework and excellent development and deployment processes.
  • Provide technical guidance to both your team members and your peers from the development team.
  • Work with the development teams closely to gather system requirements, new service proposals, and large system improvements, and come up with the infrastructure architecture leading to stable, well-monitored, performant, and secure systems.
  • Be part of and help create a positive work environment based on accountability.
  • Communicate across functions and drive engineering initiatives.
  • Initiate cross-team collaboration with product development teams to develop high-quality, polished products and services.

Required Skills :

  • 5+ years of professional experience developing and launching software products on Cloud.
  • Basic understanding of Java/Go programming
  • Good understanding of container technologies/orchestration platforms (e.g., Docker, Kubernetes)
  • Deep understanding of AWS or any cloud.
  • Good understanding of data stores like Postgres, Redis, Kafka, and Elasticsearch.
  • Good understanding of operating systems
  • Strong technical background with track record of individual technical accomplishments
  • Ability to handle multiple competing priorities in a fast-paced environment
  • Ability to establish credibility with smart engineers quickly.
  • Most importantly, ability to learn and urge to learn new things.
  • B.Tech/M.Tech in Computer Science or a related technical field.
Read more
Bengaluru (Bangalore)
4 - 7 yrs
₹20L - ₹30L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+5 more
Roles & Responsibilities
What will you do?
  • Deliver plugins for our Python-based ETL pipelines
  • Deliver Python microservices for provisioning and managing cloud infrastructure
  • Implement algorithms to analyse large data sets
  • Draft design documents that translate requirements into code
  • Effectively manage challenges associated with handling large volumes of data working to tight deadlines
  • Manage expectations with internal stakeholders and context-switch in a fast-paced environment
  • Thrive in an environment that uses AWS and Elasticsearch extensively
  • Keep abreast of technology and contribute to the engineering strategy
  • Champion best development practices and provide mentorship to others
What are we looking for?
  • First and foremost, you are a Python developer experienced with the Python data stack
  • You love and care about data
  • Your code is an artistic manifesto reflecting how elegant you are in what you do
  • You feel sparks of joy when a new abstraction or pattern arises from your code
  • You follow the DRY (Don't Repeat Yourself) and KISS (Keep It Short and Simple) principles
  • You are a continuous learner
  • You have a natural willingness to automate tasks
  • You have critical thinking and an eye for detail
  • Excellent ability and experience of working to tight deadlines
  • Sharp analytical and problem-solving skills
  • Strong sense of ownership and accountability for your work and delivery
  • Excellent written and oral communication skills
  • Mature collaboration and mentoring abilities
  • We are keen to know your digital footprint (community talks, blog posts, certifications, courses you have participated in or you are keen to, your personal projects as well as any kind of contributions to the open-source communities if any)
Nice to have:
  • Delivering complex software, ideally in a FinTech setting
  • Experience with CI/CD tools such as Jenkins, CircleCI
  • Experience with code versioning (git / mercurial / subversion)
Read more
Quizizz

at Quizizz

1 video
4 recruiters
Sangram Matkar
Posted by Sangram Matkar
Bengaluru (Bangalore)
3.5 - 10 yrs
₹20L - ₹70L / yr
skill iconMongoDB
Mongoose
skill iconJava
skill iconNodeJS (Node.js)
MySQL
+3 more

Overview

At Quizizz, we are building engaging learning experiences for teachers and students. Our mission is to motivate every learner and help them achieve their true potential. We are among one of the fastest-growing Edtech platforms globally and are currently used by 80% of the US schools every month, serving over 70M monthly actives around the world. Our goal is to reach every classroom around the world.

Quizizz has a passionate user base - we have organically reached this scale. Our NPS is 82 and our users love and deeply care about the platform. Check out our Twitter page (https://twitter.com/search?f=tweets&vertical=default&q=quizizz&src=typd) to see all the love shared by our users.

We are a venture-backed and profitable company creating a meaningful impact in K-12 education. We are a small, passionate, global team working on challenging problems to improve education. Look forward to using the latest technologies, fast development, and interacting with users to build delightful product experiences.

 

What the future holds

  • You will architect, build, and scale the backend systems that power our applications, which are loved and used by millions of learners every day.
  • You will work closely with a cross-functional team consisting of strategy, design, development, and program managers to determine how to design scalable backend systems and APIs to meet their needs.
  • You possess a passion for improving techniques, processes, tracking, and continuously improving our engineering practices.

Requirements

  • You will be utilizing technologies like Redis and WebSockets to create a high-reliability, low-latency API that will handle massive multiplayer sessions (see the sketch after this list).
  • You will be building a database layer using MongoDB to efficiently store/retrieve large amounts of data.
  • You will be setting up suitable backup and redundancy measures for maintaining 100% uptime for all our mission-critical services.
  • You will be logging all data, and setting up jobs to generate analytics reports that will drive all our decisions.
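As an illustration only (not Quizizz's actual code), a minimal redis-py sketch of low-latency session state - a sorted set per game session acting as a live leaderboard:

    import redis

    r = redis.Redis(host="localhost", port=6379, decode_responses=True)

    def record_answer(session_id: str, player_id: str, points: int) -> None:
        # ZINCRBY is O(log N), cheap enough to call on every answer
        r.zincrby(f"session:{session_id}:scores", points, player_id)

    def leaderboard(session_id: str, top_n: int = 10):
        # Highest scores first, with their values
        return r.zrevrange(f"session:{session_id}:scores", 0, top_n - 1,
                           withscores=True)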

Benefits

At Quizizz, we have built a world-class team of talented individuals. While we all care deeply about our work, we also ensure that we maintain a healthy work-life balance. Our policies are designed to ensure the well-being and comfort of our employees. Some of the benefits we offer include:

    • Healthy work-life balance. Put in your 8 hours, and enjoy the rest of your day.
    • Flexible leave policy. Take time off when you need it.
    • Comprehensive health coverage of Rs. 6 lakhs, covering the employee and their parents, spouse, and children. Pre-existing conditions are covered from day 1, along with benefits like free doctor consultations and more.
    • Relocation support including travel and accommodation, and we'll also pay for a broker to find your home in Bangalore!
    • Rs. 20,000 annual health and wellness allowance.
    • Professional development support. We will reimburse you for relevant courses and books that you need to become a better professional.
    • Delicious Meals including breakfast and lunch served at office, and a fully-stocked pantry for all your snacking needs.

 

 
 
Read more
Gurugram, Pune, Bengaluru (Bangalore), Delhi, Noida, Ghaziabad, Faridabad
2 - 9 yrs
₹8L - ₹20L / yr
skill iconPython
Hadoop
Big Data
Spark
Data engineering
+3 more

Key Responsibilities: (Data Developer - Python, Spark)

Exp : 2 to 9 Yrs 

Development of data platforms, integration frameworks, processes, and code.

Develop and deliver APIs in Python or Scala for Business Intelligence applications built using a range of web languages

Develop comprehensive automated tests for features via end-to-end integration tests, performance tests, acceptance tests and unit tests.

Elaborate stories in a collaborative agile environment (SCRUM or Kanban)

Familiarity with cloud platforms like GCP, AWS or Azure.

Experience with large data volumes.

Familiarity with writing rest-based services.

Experience with distributed processing and systems

Experience with Hadoop / Spark toolsets

Experience with relational database management systems (RDBMS)

Experience with Data Flow development

Knowledge of Agile and associated development techniques including:

Read more
Fintech Company
Agency job
via Jobdost by Sathish Kumar
Bengaluru (Bangalore)
2 - 4 yrs
₹7L - ₹12L / yr
skill iconPython
SQL
Data Warehouse (DWH)
Hadoop
skill iconAmazon Web Services (AWS)
+7 more

Purpose of Job:

Responsible for drawing insights from many sources of data to answer important business questions and help the organization make better use of data in their daily activities.


Job Responsibilities:

We are looking for a smart and experienced Data Engineer 1 who can work with a senior manager to
⮚ Build DevOps solutions and CI/CD pipelines for code deployment
⮚ Build unit test cases for APIs and code in Python (see the sketch after this section)
⮚ Manage AWS resources including EC2, RDS, CloudWatch, Amazon Aurora, etc.
⮚ Build and deliver high-quality data architecture and pipelines to support business and reporting needs
⮚ Deliver on data architecture projects and implementation of next-generation BI solutions
⮚ Interface with other teams to extract, transform, and load data from a wide variety of data sources
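A hedged sketch of the unit-testing responsibility above, using pytest; compute_emi is a hypothetical helper invented for illustration, not part of any real codebase here:

    import pytest

    def compute_emi(principal: float, annual_rate: float, months: int) -> float:
        """Standard EMI formula: P * r * (1+r)^n / ((1+r)^n - 1)."""
        r = annual_rate / 12 / 100
        if r == 0:
            return principal / months
        factor = (1 + r) ** months
        return principal * r * factor / (factor - 1)

    def test_zero_interest_divides_evenly():
        assert compute_emi(12000, 0, 12) == pytest.approx(1000)

    def test_known_value():
        # 1,00,000 at 12% p.a. over 12 months is approximately 8,884.88
        assert compute_emi(100000, 12, 12) == pytest.approx(8884.88, rel=1e-4)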
Qualifications:
Education: MS/MTech/BTech graduates or equivalent with focus on data science and quantitative fields (CS, Eng, Math, Eco)
Work Experience: Proven 1+ years of experience in data mining (SQL, ETL, data warehouse, etc.) and using SQL databases

 

Skills
Technical Skills
⮚ Proficient in Python and SQL. Familiarity with statistics or analytical techniques
⮚ Data warehousing experience with Big Data technologies (Hadoop, Hive, HBase, Pig, Spark, etc.)
⮚ Working knowledge of tools and utilities - AWS, DevOps with Git, Selenium, Postman, Airflow, PySpark
Soft Skills
⮚ Deep curiosity and humility
⮚ Excellent storyteller and communicator
⮚ Design thinking

Read more
Amagi Media Labs

at Amagi Media Labs

3 recruiters
Rajesh C
Posted by Rajesh C
Bengaluru (Bangalore), Chennai
10 - 14 yrs
₹40L - ₹60L / yr
Engineering Management
skill iconPython
Spark
skill iconJava
Big Data
+3 more

Job Title: Engineering Manager

Job Location: Chennai, Bangalore
Job Summary
The Engineering Org is looking for a proficient Engineering Manager to join a team that is building exciting and futuristic Data Products at Condé Nast to enable both internal and external marketers to target audiences in real time. As an Engineering Manager, you will drive the day-to-day execution of technical and architectural decisions. The EM will own engineering deliverables, inclusive of solving dependencies such as architecture, solutions, sequencing, and working with other engineering delivery teams. This role is also responsible for driving innovation, prototyping, and recommending solutions. Above all, you will influence how users interact with Condé Nast's industry-leading journalism.
● Primary Responsibilities
● Manage a high-performing team of Software and Data Engineers within the Data & ML Engineering team, part of the Engineering Data Organization.
● Provide leadership and guidance to the team in data discovery, data ingestion, transformation, and storage
● Utilize a product mindset to build, scale, and deploy holistic data products after successful prototyping, and drive their engineering implementation
● Provide technical coaching and lead direct reports and other members of adjacent support teams to the highest level of performance.
● Evaluate performance of direct reports and offer career development guidance.
● Meet hiring and retention targets of the team & build a high-performance culture
● Handle escalations from internal stakeholders and manage critical issues to resolution.
● Collaborate with Architects, Product Managers, Project Managers, and other teams to deliver high-quality products.
● Identify recurring system and application issues and enable engineers to work with release teams, infra teams, product development, vendors, and other stakeholders in investigating and resolving the cause.
● Required Skills
● 4+ years of managing Software Development teams, preferably in ML and Data Engineering teams.
● 4+ years of Agile Software development practices
● 12+ years of Software Development experience.
● Excellent problem-solving and system design skills
● Hands-on: writing and reviewing code, primarily in Spark, Python, and/or Java
● Hands-on: architecting & designing end-to-end data pipelines (NoSQL databases, job schedulers, Big Data development, preferably on Databricks / Cloud)
● Experience with SOA & microservice architecture
● Knowledge of Software Engineering best practices, with experience implementing CI/CD and log aggregation/monitoring/alerting for production systems
● Working knowledge of cloud and DevOps skills (AWS will be preferred)
● Strong verbal and written communication skills.
● Experience in evaluating team member performance and offering career development guidance.
● Experience in providing technical coaching to direct reports.
● Experience in architecting highly scalable products.
● Experience in collaborating with global stakeholder teams.
● Experience in working on highly available production systems.
● Strong knowledge of software release processes and release pipelines.
About Condé Nast
CONDÉ NAST INDIA (DATA)
Over the years, Condé Nast successfully expanded and diversified into digital, TV, and social platforms - in other words, a staggering amount of user data. Condé Nast made the right move to invest heavily in understanding this data and formed a whole new Data team entirely dedicated to data processing, engineering, analytics, and visualization. This team helps drive engagement, fuel process innovation, further content enrichment, and increase market revenue. The Data team aimed to create a company culture where data was the common language and facilitate an environment where insights shared in real-time could improve performance.
The Global Data team operates out of Los Angeles, New York, Chennai, and London. The team at Condé Nast Chennai works extensively with data to amplify its brands' digital capabilities and boost online revenue. We are broadly divided into four groups - Data Intelligence, Data Engineering, Data Science, and Operations (including Product and Marketing Ops, Client Services) - along with Data Strategy and monetization. The teams built capabilities and products to create data-driven solutions for better audience engagement.
What we look forward to:
We want to welcome bright, new minds into our midst and work together to create diverse forms of self-expression. At Condé Nast, we encourage the imaginative and celebrate the extraordinary. We are a media company for the future, with a remarkable past. We are Condé Nast, and It Starts Here.

Read more
Acceldata

at Acceldata

5 recruiters
Richa  Kukar
Posted by Richa Kukar
Bengaluru (Bangalore)
10 - 15 yrs
₹40L - ₹70L / yr
skill iconJava
J2EE
skill iconSpring Boot
Hibernate (Java)
Big Data
Greetings!
 
Please find the role details below and confirm your interest.

 

Founder and CEO - https://www.linkedin.com/in/rconline

Funding Discussions and Interviews - https://www.youtube.com/watch?v=5J6jDS7mxXk&t=2s

Acceldata Demo - https://www.youtube.com/watch?v=icveuPmfypM

Co-Founder - https://www.linkedin.com/in/w1nash

Co-Founder - https://www.linkedin.com/in/raghumitra
 
Acceldata is creating the Data observability space. We make it possible for data-driven enterprises to effectively monitor, discover, and validate Data pipelines at Petabyte scale. Our customers include a Fortune 500 company, one of Asia's largest telecom companies, and a unicorn fintech startup. We are lean, hungry, customer-obsessed, and growing fast. Our Solutions team values productivity, integrity, and pragmatism. We provide a flexible, remote-friendly work environment.
 
We are building software that can provide insights into companies' data operations and allows them to focus on delivering data reliably with speed and effectiveness. Join us in building an industry-leading data operations platform that focuses on optimizing modern data lakes for both on-premise and cloud environments.

Product Responsibilities:

 

As an Engineering Manager, you will be responsible for the end-to-end delivery of one or more products and solutions.

10+ years of relevant industry experience.

Be technically strong and deliver product excellence.

Collaborate with your team and cross-functional partners to define and influence strategy.

Drive roadmap creation and execution.

Collaborate with various functions, drive engineering initiatives, and have an impact at an organizational level.

Participate in technical design. You will be the owner of the technical direction of the products.

Define clear OKRs and measure the impact of your team and individual members.

Evangelize your products across various teams and also to external customers.

Work with the GTM team to deliver success for the products.

Collaborate with the Recruitment/HR team to hire exceptional talent for your own team and also for other teams.

 

Minimum Qualifications:

 

Bachelor's degree, preferably in Computer Science from a reputed engineering college.

10+ years of proven experience, with 2+ years of leading a team.

Very strong in hands-on coding.

8+ years of combined experience in JVM languages (Java, Scala, Kotlin, etc.), any scripting language like Python, and relational and non-relational databases.

Very strong in system design. You should have designed and implemented large-scale Web applications or data systems.

Strong verbal and written communication skills.

Demonstrated experience recruiting and managing technical teams, including performance management.

 

 

Good to have:

Don't worry, we'll ramp you up, but experience in any of these areas will help you hit the ground running:

Familiarity with technologies such as Apache Spark, Apache Hadoop ecosystem, and modern warehouses like Snowflake, AWS Redshift, etc.

Experience working with public cloud systems such as AWS, GCP, or Azure.

Experience with containerization and technologies such as Kubernetes and Docker.

 Knowledge of large-scale, data-heavy distributed systems.

Read more
Acceldata

at Acceldata

5 recruiters
Richa  Kukar
Posted by Richa Kukar
Bengaluru (Bangalore)
10 - 14 yrs
₹30L - ₹65L / yr
skill iconJava
J2EE
skill iconSpring Boot
Hibernate (Java)
Big Data
Acceldata is creating the Data observability space. We make it possible for data-driven enterprises to effectively monitor, discover, and validate Data pipelines at Petabyte scale. Our customers include a Fortune 500 company, one of Asia's largest telecom companies, and a unicorn fintech startup. We are lean, hungry, customer-obsessed, and growing fast. Our Engineering team values productivity, integrity, and pragmatism. We provide a flexible, remote-friendly work environment.
 
As a principal engineer, you will own the core Acceldata platform data engine. We're looking for people with a strong background or inclination towards data engineering; you're comfortable dealing with lots of moving pieces and have a solid understanding of modern data management systems.
              

You will:          

  • Be actively involved in strategic direction and product decisions.
  • Architect and design new services and features
  • Be the go-to person for debugging performance issues on the platform and customer environments
  • Dive deep into open source data engines and work on optimizing their performance.
  • Design, build, and maintain low latency APIs.
  • Mentor and guide the engineering team on best practices and software architecture
  • Work closely with our customers and sales teams on a regular basis to carve out new features and use cases 
  • Develop services that will be consumed by frontend and solution engineers.
                                 

You need:      

  • 12+ years of strong development experience in one or more general programming languages: JVM (Java, Scala, Kotlin) or Golang
  • Strong Computer Science fundamentals in data structures, algorithm design, and problem-solving.
  • Strong distributed systems knowledge, and experience shipping enterprise software  
  • SQL mastery and overall experience working with data storage and retrieval systems
  • Organized, thorough, and detail-oriented

Good to have:

  • Contribution to open-source projects.

  • Background in enterprise software
  • JVM performance tuning and debugging
Read more
VUMONIC

at VUMONIC

2 recruiters
Simran Bhullar
Posted by Simran Bhullar
Bengaluru (Bangalore)
1 - 3 yrs
₹5L - ₹7.5L / yr
skill iconDocker
skill iconKubernetes
DevOps
Google Cloud Platform (GCP)
skill iconElastic Search
+3 more

Designation: DevOps Engineer

Location : HSR, Bangalore


About the Company


Making impact driven by Data. 


Vumonic Datalabs is a data-driven startup providing business insights to e-commerce & e-tail companies to help them make data-driven decisions to scale up their business and understand their competition better. As one of the EU's fastest-growing (and coolest) data companies, we believe in revolutionizing the way businesses make their most important business decisions by providing first-hand transaction-based insights in real time.



About the Role

 

We are looking for an experienced and ambitious DevOps engineer who will be responsible for deploying product updates, identifying production issues, and implementing integrations that meet our customers' needs. As a DevOps engineer at Vumonic Datalabs, you will have the opportunity to work with a thriving global team to help us build functional systems that improve customer experience. If you have a strong background in software engineering, are hungry to learn, passionate about your work, and are familiar with the mentioned technical skills, we'd love to speak with you.



What you’ll do


  • Optimize and engineer the DevOps infrastructure for high availability, scalability, and reliability.
  • Monitor logs on servers & handle cloud management
  • Build and set up new development tools and infrastructure to reduce occurrences of errors 
  • Understand the needs of stakeholders and convey this to developers
  • Design scripts to automate and improve development and release processes
  • Test and examine codes written by others and analyze results
  • Ensure that systems are safe and secure against cybersecurity threats
  • Identify technical problems, perform root cause analysis for production errors and develop software updates and ‘fixes’
  • Work with software developers, engineers to ensure that development follows established processes and actively communicates with the operations team.
  • Design procedures for system troubleshooting and maintenance.


What you need to have


TECHNICAL SKILLS

  • Experience working with the following tools: Google Cloud Platform, Kubernetes, Docker, Elastic Search, Terraform, Redis
  • Experience working with the following tools preferred: Python, Node.js, MongoDB, Rancher, Cassandra
  • Experience with real-time monitoring of cloud infrastructure using publicly available tools and servers
  • 2 or more years of experience as a DevOps engineer (startup/technical experience preferred)

You are

  • Excited to learn, are a hustler and “Do-er”
  • Passionate about building products that create impact.
  • Updated with the latest technological developments & enjoy upskilling yourself with market trends.
  • Willing to experiment with novel ideas & take calculated risks.
  • Someone with a problem-solving attitude with the ability to handle multiple tasks while meeting expected deadlines.
  • Interested to work as part of a supportive, highly motivated and fun team.
Read more
Impetus

at Impetus

3 recruiters
Rohit Agrawal
Posted by Rohit Agrawal
Remote, Bengaluru (Bangalore), Noida, Gurugram
3 - 6 yrs
₹10L - ₹25L / yr
skill iconPython
PySpark
Data engineering
Big Data
Hadoop
+2 more
  • Experience of providing technical leadership in the Big Data space (Hadoop stack like Spark, M/R, HDFS, Pig, Hive, HBase, Flume, Sqoop, etc.). Should have contributed to open-source Big Data technologies.
  • Expert-level proficiency in Python
  • Experience in visualizing and evangelizing next-generation infrastructure in Big Data space (Batch, Near Real-time, Real-time technologies).
  • Passionate for continuous learning, experimenting, applying, and contributing towards cutting edge open source technologies and software paradigms
  • Strong understanding and experience in distributed computing frameworks, particularly Apache Hadoop 2.0 (YARN; MR & HDFS) and associated technologies.
  • Hands-on experience with Apache Spark and its components (Streaming, SQL, MLlib)
  • Operating knowledge of cloud computing platforms (AWS, especially EMR, EC2, S3, SWF services, and the AWS CLI); see the EMR sketch after this list
  • Experience working within a Linux computing environment, and use of command-line tools including knowledge of shell/Python scripting for automating common tasks
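As a hedged example of the AWS/EMR automation above, a short boto3 sketch that submits a Spark step to a running EMR cluster; the cluster id, region, and S3 path are placeholders:

    import boto3

    emr = boto3.client("emr", region_name="us-east-1")
    emr.add_job_flow_steps(
        JobFlowId="j-XXXXXXXXXXXXX",  # placeholder cluster id
        Steps=[{
            "Name": "spark-batch",
            "ActionOnFailure": "CONTINUE",
            "HadoopJarStep": {
                "Jar": "command-runner.jar",
                "Args": ["spark-submit", "s3://example-bucket/jobs/etl.py"],
            },
        }],
    )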
Read more
Zycus

at Zycus

10 recruiters
Siddharth Shilimkar
Posted by Siddharth Shilimkar
Mumbai, Bengaluru (Bangalore), Pune
14 - 20 yrs
₹15L - ₹40L / yr
Engineering Management
Engineering Manager
Engineering Director
Engineering Head
VP of Engineering
+32 more

We are looking for a Director of Engineering to lead one of our key product engineering teams. This role will report directly to the VP of Engineering and will be responsible for successful execution of the company's business mission through development of cutting-edge software products and solutions.

  • As an owner of the product, you will be required to plan and execute the product roadmap and provide technical leadership to the engineering team.
  • You will have to collaborate with Product Management and Implementation teams and build a commercially successful product.
  • You will be responsible for recruiting & leading a team of highly skilled software engineers and providing strong hands-on engineering leadership.
  • Deep technical knowledge in Software Product Engineering using Java/J2EE, Node.js, React.js, full stack, NoSQL DBs (MongoDB, Cassandra, Neo4j), Elasticsearch, Kibana, ELK, Kafka, Redis, Docker, Kubernetes, Apache Solr, ActiveMQ, RabbitMQ, Spark, Scala, Sqoop, HBase, Hive, WebSockets, web crawlers, Spring Boot, etc. is a must

Requirements

16+ years of experience in Software Engineering with at least 5+ years as an engineering leader in a software product company.

  • Hands-on technical leadership with proven ability to recruit high performance talent
  • High technical credibility - ability to audit technical decisions and push for the best solution to a problem.
  • Experience building E2E applications, right from the backend database to the persistence layer.
  • Experience with UI technologies (Angular, React.js, Node.js) or a full-stack environment will be preferred.
  • Experience with NoSQL technologies (MongoDB, Cassandra, Neo4j, Dynamodb, etc.)
  • Elastic Search, Kibana, ELK, Logstash.
  • Experience in developing Enterprise Software using Agile Methodology.
  • Good understanding of Kafka, Redis, ActiveMQ, RabbitMQ, Solr etc.
  • SaaS cloud-based platform exposure.
  • Experience on Docker, Kubernetes etc.
  • Ownership of E2E design and development, plus exposure to delivering quality enterprise products/applications
  • A track record of setting and achieving high standards
  • Strong understanding of modern technology architecture
  • Key Programming Skills: Java, J2EE with cutting edge technologies
  • Excellent team building, mentoring and coaching skills are a must-have

Benefits

Five Reasons Why You Should Join Zycus

  1. Cloud Product Company: We are a Cloud SaaS Company and our products are created by using the latest technologies like ML and AI. Our UI is in Angular JS and we are developing our mobile apps using React.
  2. A Market Leader: Zycus is recognized by Gartner (world’s leading market research analyst) as a Leader in Procurement Software Suites.
  3. Move between Roles: We believe that change leads to growth and therefore we allow our employees to shift careers and move to different roles and functions within the organization
  4. Get a Global Exposure: You get to work and deal with our global customers.
  5. Create an Impact: Zycus gives you the environment to create an impact on the product and transform your ideas into reality. Even our junior engineers get the opportunity to work on different product features.

About Us

Zycus is a pioneer in Cognitive Procurement software and has been a trusted partner of choice for large global enterprises for two decades. Zycus has been consistently recognized by Gartner, Forrester, and other analysts for its Source to Pay integrated suite. Zycus powers its S2P software with the revolutionary Merlin AI Suite. Merlin AI takes over the tactical tasks and empowers procurement and AP officers to focus on strategic projects; offers data-driven actionable insights for quicker and smarter decisions, and its conversational AI offers a B2C type user-experience to the end-users.

Zycus helps enterprises drive real savings, reduce risks, and boost compliance, and its seamless, intuitive, and easy-to-use user interface ensures high adoption and value across the organization.

Start your #CognitiveProcurement journey with us, as you are #MeantforMore.

 

Click here to apply:

 

Director of Engineering - Zycus (Mumbai): https://apply.workable.com/zycus-1/j/D926111745/

Director of Engineering - Zycus (Bengaluru): https://apply.workable.com/zycus-1/j/90665BFD4C/

Director of Engineering - Zycus (Pune): https://apply.workable.com/zycus-1/j/3A5FBA2C7C/

 

Read more
Ganit Business Solutions

at Ganit Business Solutions

3 recruiters
Viswanath Subramanian
Posted by Viswanath Subramanian
Chennai, Bengaluru (Bangalore), Mumbai
4 - 6 yrs
₹7L - ₹15L / yr
SQL
skill iconAmazon Web Services (AWS)
Data Warehouse (DWH)
Informatica
ETL
+1 more

Responsibilities:

  • Must be able to write quality code and build secure, highly available systems.
  • Assemble large, complex datasets that meet functional / non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc., with guidance.
  • Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
  • Monitor performance and advise on any necessary infrastructure changes.
  • Define data retention policies.
  • Implement the ETL process and optimal data pipeline architecture
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
  • Create design documents that describe the functionality, capacity, architecture, and process.
  • Develop, test, and implement data solutions based on finalized design documents.
  • Work with data and analytics experts to strive for greater functionality in our data systems
  • Proactively identify potential production issues and recommend and implement solutions

Skillsets:

  • Good understanding of optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS 'big data' technologies.
  • Proficient understanding of distributed computing principles
  • Experience in working with batch processing/real-time systems using various open-source technologies like NoSQL, Spark, Pig, Hive, Apache Airflow.
  • Implemented complex projects dealing with considerable data size (PB).
  • Optimization techniques (performance, scalability, monitoring, etc.)
  • Experience with integration of data from multiple data sources
  • Experience with NoSQL databases, such as HBase, Cassandra, MongoDB, etc.
  • Knowledge of various ETL techniques and frameworks, such as Flume
  • Experience with various messaging systems, such as Kafka or RabbitMQ
  • Good understanding of Lambda Architecture, along with its advantages and drawbacks
  • Creation of DAGs for data engineering (a minimal Airflow sketch follows this list)
  • Expert at Python/Scala programming, especially for data engineering/ETL purposes
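A minimal Airflow sketch of the DAG-creation skill above - a daily extract -> transform dependency. Task bodies are placeholders for real logic:

    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pulling raw files")  # placeholder for real extraction logic

    def transform():
        print("applying business rules")  # placeholder for real transforms

    with DAG(dag_id="daily_etl_sketch",
             start_date=datetime(2022, 1, 1),
             schedule_interval="@daily",
             catchup=False) as dag:
        t_extract = PythonOperator(task_id="extract", python_callable=extract)
        t_transform = PythonOperator(task_id="transform", python_callable=transform)
        t_extract >> t_transform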
Read more
MOBtexting

at MOBtexting

1 recruiter
Nandhini Beke
Posted by Nandhini Beke
Bengaluru (Bangalore)
3 - 4 yrs
₹5L - ₹6L / yr
MySQL
MySQL DBA
Data architecture
SQL
Cassandra
+1 more

Job Description

 

Experience: 3+ yrs

We are looking for a MySQL DBA who will be responsible for ensuring the performance, availability, and security of clusters of MySQL instances. You will also be responsible for database design, database architecture, orchestrating upgrades, backups, and provisioning of database instances. You will also work in tandem with other teams, preparing documentation and specifications as required.

 

Responsibilities:

Database design and data architecture

Provision MySQL instances, both in clustered and non-clustered configurations

Ensure performance, security, and availability of databases

Prepare documentation and specifications

Handle common database procedures, such as upgrade, backup, recovery, migration, etc.

Profile server resource usage, optimize and tweak as necessary

 

Skills and Qualifications:

Proven expertise in database design and data architecture for large scale systems

Strong proficiency in MySQL database management

Decent experience with recent versions of MySQL

Understanding of MySQL's underlying storage engines, such as InnoDB and MyISAM

Experience with replication configuration in MySQL (a health-check sketch follows this section)

Knowledge of de-facto standards and best practices in MySQL

Proficient in writing and optimizing SQL statements

Knowledge of MySQL features, such as its event scheduler

Ability to plan resource requirements from high level specifications

Familiarity with other SQL/NoSQL databases such as Cassandra, MongoDB, etc.

Knowledge of limitations in MySQL and their workarounds in contrast to other popular relational databases
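A hedged sketch of the kind of routine check a MySQL DBA automates - replica health via mysql-connector-python. Host and credentials are placeholders; on MySQL before 8.0.22 the statement and column names use SLAVE/Slave instead of REPLICA/Replica:

    import mysql.connector

    conn = mysql.connector.connect(host="replica.example.internal",
                                   user="monitor", password="***")
    cur = conn.cursor(dictionary=True)
    cur.execute("SHOW REPLICA STATUS")
    status = cur.fetchone()
    if status:
        print("IO running:", status["Replica_IO_Running"],
              "| SQL running:", status["Replica_SQL_Running"],
              "| Lag (s):", status["Seconds_Behind_Source"])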

Read more
AI-powered Growth Marketing platform
Mumbai, Bengaluru (Bangalore)
2 - 7 yrs
₹8L - ₹25L / yr
skill iconJava
NOSQL Databases
skill iconMongoDB
Cassandra
Apache
+3 more
The Impact You Will Create
  • Build campaign generation services which can send app notifications at a speed of 10 million a minute
  • Dashboards to show Real time key performance indicators to clients
  • Develop complex user segmentation engines which create segments on terabytes of data within a few seconds
  • Building highly available & horizontally scalable platform services for ever-growing data
  • Use cloud based services like AWS Lambda for blazing fast throughput & auto scalability
  • Work on complex analytics on terabytes of data like building Cohorts, Funnels, User path analysis, Recency Frequency & Monetary analysis at blazing speed
  • You will build backend services and APIs to create scalable engineering systems.
  • As an individual contributor, you will tackle some of our broadest technical challenges that requires deep technical knowledge, hands-on software development and seamless collaboration with all functions.
  • You will envision and develop features that are highly reliable and fault tolerant to deliver a superior customer experience.
  • Collaborating various highly-functional teams in the company to meet deliverables throughout the software development lifecycle.
  • Identify and improvise areas of improvement through data insights and research.
What we look for?
  • 2-5 years of experience in backend development and must have worked on Java/Shell/Perl/Python scripting.
  • Solid understanding of engineering best practices, continuous integration, and incremental delivery.
  • Strong analytical skills, debugging and troubleshooting skills, product line analysis.
  • Follower of agile methodology (sprint planning, working on JIRA, retrospectives, etc.).
  • Proficiency in usage of tools like Docker, Maven, Jenkins, and knowledge of frameworks in Java like Spring, Spring Boot, Hibernate, JPA.
  • Ability to design application modules using various concepts like object orientation, multi-threading, synchronization, caching, fault tolerance, sockets, various IPCs, database interfaces, etc.
  • Hands-on experience with Redis, MySQL, streaming technologies like Kafka producers/consumers (see the sketch below), and NoSQL databases like MongoDB/Cassandra.
  • Knowledge of version control like Git and deployment processes like CI/CD.
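A short, hedged kafka-python sketch of the producer side mentioned above; the broker address and topic are placeholders, and linger_ms illustrates throughput-oriented batching:

    import json
    from kafka import KafkaProducer

    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
        linger_ms=10,  # batch small messages for higher throughput
        acks=1,
    )

    for user_id in range(1000):
        producer.send("app-notifications",
                      {"user_id": user_id, "msg": "sale ends today"})

    producer.flush()  # block until queued messages are delivered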
Read more
Bengaluru (Bangalore)
2 - 6 yrs
₹10L - ₹25L / yr
skill iconJava
skill iconJavascript
J2EE
Hibernate (Java)
skill iconHTML/CSS
+1 more
Responsibilities:
• Design and code excellent workflows, features, or modules in the Simplify360 suite.
• Tackle challenging engineering and product problems, and create solutions to customers' problems.
• Create new ideas with our design teams to continually iterate on the experience.
• Work cross-functionally to evaluate the relative importance of and need for product initiatives.
• Take ownership of modules from design to implementation and deployment.
Requirements
• Great software design and development skills. Deep knowledge of design, coding, and implementation.
• Ability to work both independently and in cooperation with others.
• A sense of urgency and ownership over the product.
• Comfortable with full-stack projects and able to build minimum working prototypes quickly.
• Fluency with both front-end (e.g., HTML/CSS/JavaScript, Bootstrap, jQuery) and back-end technologies used, primarily Core Java, J2EE, Struts, Hibernate.
• Knowledge of Solr, Kafka would be an added advantage.
• Knowledge of Big Data solutions like Hadoop, HBase would be an added advantage.
• Great attitude towards work and people.
Read more
Amagi Media Labs

at Amagi Media Labs

3 recruiters
Rajesh C
Posted by Rajesh C
Bengaluru (Bangalore)
8 - 15 yrs
₹25L - ₹55L / yr
Product Management
skill iconAmazon Web Services (AWS)
Datawarehousing
Product Manager
Big Data
+3 more

Sr Product Manager / Lead Product Manager – Data Platform

 

Description:

At Amagi, we are looking for a product leader to build a world-class big data and analytics platform to help our teams make data-driven decisions and to accelerate our business outcomes.

We are looking for someone who is innovative and experienced in end-to-end product management to drive our long-term data and analytics strategy.

 

The ideal candidate would be responsible for owning the product roadmap and KPIs, and driving product operational tasks to ensure configurable and scalable solutions.

 

Primary Responsibilities:

  • Lead the product requirements to build a real-time, highly scalable, low-latency data platform.
  • Author PRDs and define the strategic roadmap.
  • Lead the product design, MVPs, and POCs, and fast-track deliveries
  • Collaborate with various functions and design the most effective solutions.
  • Understand customer needs and define the data solutions and insights to drive business outcomes
  • Define key product performance metrics to drive business and customer outcomes

 

Basic Qualification

  • 12+ years of overall SDLC experience with 7+ years of product management experience in a fast-paced company.
  • Proven experience delivering large-scale, highly available big data processing systems.
  • Knowledge of data pipeline design, data transformation and integration methodologies.
  • Technically savvy with experience in big data systems like AWS, RedShift, Athena, Kafka and related technologies
  • Demonstrate collaborative approach and ability to work with distributed, cross functional teams.
  • Experience in taking products through full life cycle, from proposal to launch
  • Strategic thinking capabilities to define the product roadmap and right prioritization of the backlog in line with the long-term vision
  • Strong communication and stakeholder management skills with the ability to coordinate across a diverse group of technical and non-technical stakeholders.
  • Ability to deal with ambiguity and use data to solve ambiguous problems
  • Technical ability to understand and discuss software architecture, product integration, non-functional requirements etc. with the Engineering team.

 

Preferred Qualification:

  • Technically savvy with good understanding of cloud application development.
  • Software development experience building Enterprise platforms
  • Experience in third-party vendor assessments
  • Experience working with AI , ML , Bigdata and analytics tools
  • Understanding of regulations such as data privacy, data security and governance
Read more
Acceldata

at Acceldata

5 recruiters
Richa  Kukar
Posted by Richa Kukar
Bengaluru (Bangalore)
3 - 8 yrs
₹20L - ₹40L / yr
Hadoop
SRE
DevOps
Reliability engineering
Load balancing
+2 more
Acceldata is creating the Data observability space. We make it possible for data-driven enterprises to effectively monitor, discover, and validate Data pipelines at Petabyte scale. Our customers include a Fortune 500 company, one of Asia's largest telecom companies, and a unicorn fintech startup. We are lean, hungry, customer-obsessed, and growing fast. Our Solutions team values productivity, integrity, and pragmatism. We provide a flexible, remote-friendly work environment.
 
We are building software that can provide insights into companies' data operations and allows them to focus on delivering data reliably with speed and effectiveness. Join us in building an industry-leading data operations platform that focuses on optimizing modern data lakes for both on-premise and cloud environments.

 

Responsibilities

  • Our Site reliability engineers work on improving the availability, scalability, performance, and reliability of enterprise production services for our products as well as our customer’s data lake environments.
  • You will use your expertise to improve the reliability and performance of Hadoop Data lake clusters and data management services. Just like our products, our SREs are expected to be platform- and vendor-agnostic when it comes to implementing, stabilizing, and tuning Hadoop ecosystems.
  • You’d be required to provide implementation guidance, best practices framework, and technical thought leadership to our customers for their Hadoop Data lake implementation and migration initiatives.
  • You need to be 100% hands-on and, as required, test, monitor, administer, and operate multiple data lake clusters across data centers.
  • Troubleshoot issues across the entire stack - hardware, software, application, and network.
  • Dive into problems with an eye to both immediate remediations as well as the follow-through changes and automation that will prevent future occurrences.
  • Must demonstrate exceptional troubleshooting and strong architectural skills and clearly and effectively describe this in both a verbal and written format.

Requirements

  • Customer-focused, Self-driven, and Motivated with a strong work ethic and a passion for problem-solving.
  • 4+ years of designing, implementing, tuning, and managing services in a distributed, enterprise-scale on-premise and public/private cloud environment.
  • Familiarity with infrastructure management and operations lifecycle concepts and ecosystem.
  • Hadoop cluster design, implementation, management, and performance tuning experience with HDFS, YARN, Hive/Impala, Spark, Kerberos, and related Hadoop technologies is a must.
  • Must have strong SQL/HQL query troubleshooting and tuning skills on Hive/HBase.
  • Must have a strong capacity planning experience for Hadoop ecosystems/data lakes.
  • Good to have hands-on experience with – KAFKA, RANGER/SENTRY, NiFi, Ambari, Cloudera Manager, and HBASE.
  • Good to have data modeling, data engineering, and data security experience within the Hadoop ecosystem. Good to have deep JVM/Java debugging and tuning skills.
Read more