
Hadoop Jobs in Bangalore (Bengaluru)

Explore top Hadoop job opportunities in Bangalore (Bengaluru) at leading companies and startups. All jobs are posted by verified employees, who can be contacted directly below.

Data Engineer (Remote/ Bengaluru)

Founded 2016
Location: Remote, Bengaluru (Bangalore)
Experience: 2 - 10 years
Salary: ₹2,00,000 - ₹15,00,000

Job Title: Data Engineer (Remote)

What you will work on:
We help many of our clients make sense of their large investments in data, whether building analytics solutions or machine learning applications. You will work on cutting-edge cloud-native technologies to crunch terabytes of data into meaningful insights.

Responsibilities:
• Collaborate with Business, Marketing & CRM teams to build highly efficient data pipelines
• Deal with customer data and build highly efficient pipelines
• Build insights dashboards
• Troubleshoot data loss, data inconsistency, and other data-related issues
• Maintain backend services (written in Golang) for metadata generation
• Provide prompt support and solutions for Product, CRM, and Marketing partners

What you bring (skills):
• 2+ years of experience in data engineering
• Coding experience in one of the following languages: Golang, Java, Python, C++
• Fluency in SQL
• Working experience with at least one of the following data-processing engines: Flink, Spark, Hadoop, Hive

Great if you know:
• T-shaped skills are always preferred, so a passion for working across the full stack spectrum is more than welcome
• Exposure to infrastructure skills like Docker, Istio and Kubernetes is a plus
• Experience building and maintaining large-scale and/or real-time complex data-processing pipelines using Flink, Hadoop, Hive, Storm, etc.

Advantage Cognologix:
• A higher degree of autonomy, startup culture & small teams
• Opportunities to become an expert in emerging technologies
• Remote working options for the right maturity level
• Competitive salary & family benefits
• Performance-based career advancement

About Cognologix:
Cognologix helps companies disrupt by reimagining their business models and innovating like a startup. We are at the forefront of digital disruption and take a business-first approach to help meet our clients' strategic goals. We are a data-focused organization helping our clients deliver their next generation of products in the most efficient, modern and cloud-native way.

Skills: Java, Python, Hadoop, Hive, Spark programming, Kafka

Thanks & regards,
Cognologix HR Dept.

Job posted by Rupa Kadam

Data Architect

Founded 2004
Location: Mumbai, Bengaluru (Bangalore)
Experience: 5 - 10 years
Salary: ₹11,00,000 - ₹20,00,000

Who we are
Searce is a niche cloud consulting business with a futuristic tech DNA. We use new-age tech to realise the "Next" in the "Now" for our clients. We specialise in cloud data engineering, AI/machine learning and advanced cloud infrastructure tech such as Anthos and Kubernetes. We are one of the top and fastest-growing partners for Google Cloud and AWS globally, with over 2,500 clients successfully moved to the cloud.

What we believe
• Best practices are overrated: implementing best practices can only make one average.
• Honesty and transparency: we believe in the naked truth. We do what we tell and tell what we do.
• Client partnership: a client-vendor relationship? No. We partner with clients instead. And our sales team comprises 100% of our clients.

How we work
It's all about being happier first, and the rest follows. Searce work culture is defined by HAPPIER:
• Humble: happy people don't carry ego around. We listen to understand, not to respond.
• Adaptable: we are comfortable with uncertainty, and we accept change well, as that's what life is about.
• Positive: we are super positive about work and life in general. We love to forget and forgive. We don't hold grudges; we don't have the time or space for them.
• Passionate: we are as passionate about the great street-food vendor across the street as about Tesla's new model. Passion is what drives us to work and makes us deliver the quality we deliver.
• Innovative: innovate or die. We love to challenge the status quo.
• Experimental: we encourage curiosity and making mistakes.
• Responsible: driven, self-motivated, self-governing teams. We own it.

Responsibilities
As a Data Architect, you will work with business leads, analysts and data scientists to understand the business domain, and manage data engineers to build data products that empower better decision-making. You are passionate about the data quality of our business metrics and about the flexibility of solutions that scale to answer broader business questions. If you love to solve problems using your skills, come join Team Searce. We have a casual and fun office environment that actively steers clear of rigid "corporate" culture, focuses on productivity and creativity, and allows you to be part of a world-class team while still being yourself.

What you'll do
• Understand the business problem and translate it into data services and engineering outcomes
• Explore new technologies and learn new techniques to solve business problems creatively
• Collaborate with engineering and business teams to build better data products
• Manage the team and handle delivery of 2-3 projects

What we're looking for
• 4-7 years of experience, with hands-on experience in at least one programming language (Python, Java, Scala)
• A solid understanding of SQL is a must
• Big data: Hadoop, Hive, YARN, Sqoop
• MPP platforms: Spark, Presto
• Data-pipeline and scheduler tools: Oozie, Airflow, NiFi
• Streaming engines: Kafka, Storm, Spark Streaming
• Experience with any relational database or data warehouse
• Experience with any ETL tool
• Hands-on experience in pipeline design, ETL and application development
• Hands-on experience with cloud platforms such as AWS and GCP
• Good communication skills and strong analytical skills
• Experience in team handling and project delivery

Job posted by Nikita Rathi

Data Engineer

Founded 2010
Location: Bengaluru (Bangalore)
Experience: 3 - 7 years
Salary: ₹8,00,000 - ₹25,00,000

Strong experience in designing and developing big data applications: Hadoop, Spark/Flink/Storm, Kafka, Hive, HBase, Java/Scala, Airflow/Oozie/NiFi, Redis/Hazelcast. Experience with Cosmos DB, Azure Synapse and Azure Data Factory is a plus.

Job posted by Sreenivas Dega

Data Science Engineer (SDE I)

Founded 2017
Location: Bengaluru (Bangalore)
Experience: 1 - 3 years
Salary: ₹12,00,000 - ₹20,00,000

Couture.ai is building a patent-pending AI platform targeted towards vertical-specific solutions. The platform is already licensed by Reliance Jio and a few European retailers to power real-time experiences for their combined 200+ million end users. For this role, a credible display of innovation in past projects (or academia) is a must. We are looking for a candidate who lives and talks data and algorithms, loves to play with big data engineering, and is hands-on with Apache Spark, Kafka, RDBMS/NoSQL databases, big data analytics, and Unix and production servers. A tier-1 college background (BE from an IIT, BITS Pilani, a top NIT or IIIT, or an MS from Stanford, Berkeley, CMU or UW-Madison) or an exceptionally bright work history is a must. Let us know if this interests you and we can explore the profile further.

Job posted by Shobhit Agarwal

Data Engineer

Founded 2015
Location: Bengaluru (Bangalore)
Experience: 2 - 6 years
Salary: ₹8,00,000 - ₹14,00,000

Roles and responsibilities:
• Develop and maintain applications with PySpark
• Contribute to the overall design and architecture of the applications developed and deployed
• Tune performance with respect to executor sizing and other environment parameters, code optimization, partition tuning, etc.
• Interact with business users to understand requirements and troubleshoot issues
• Implement projects based on functional specifications

Must-have skills:
• Good experience in PySpark, including DataFrame core functions and Spark SQL
• Good experience with SQL databases; able to write queries of fair complexity
• Excellent experience in big data programming for data transformation and aggregation
• Good grasp of ETL architecture: business-rules processing and data extraction from a data lake into data streams for business consumption
• Good customer communication
• Good analytical skills
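The DataFrame grouping and aggregation work this posting centres on can be sketched without a Spark cluster. The snippet below mimics the `groupBy`/`agg` pattern in plain Python; the row data and column names are made up for illustration and are not from any employer's codebase.

```python
from collections import defaultdict

# Toy "rows" standing in for a Spark DataFrame. In PySpark the equivalent
# would be df.groupBy("region").agg(F.sum("amount")), or a Spark SQL
# GROUP BY query over the same data.
rows = [
    {"region": "south", "amount": 120},
    {"region": "north", "amount": 80},
    {"region": "south", "amount": 50},
]

def group_sum(rows, key, value):
    """Group rows by `key` and sum `value` -- the aggregation pattern
    described in the posting, minus the distributed execution."""
    totals = defaultdict(int)
    for row in rows:
        totals[row[key]] += row[value]
    return dict(totals)

print(group_sum(rows, "region", "amount"))  # {'south': 170, 'north': 80}
```

On a real cluster, Spark performs the same grouping per partition and then shuffles partial sums between executors, which is why the partition tuning mentioned above matters.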

Job posted by Sudarshini K

Senior Systems Engineer – Big Data

Founded 2017
Location: Bengaluru (Bangalore)
Experience: 2 - 5 years
Salary: ₹5,00,000 - ₹10,00,000

Skills requirements
• Knowledge of Hadoop ecosystem installation, initial configuration and performance tuning
• Expertise with Apache Ambari, Spark, Unix shell scripting, Kubernetes and Docker; knowledge of Python is desirable
• Experience with HDP Manager/clients and various dashboards
• Understanding of Hadoop security (Kerberos, Ranger and Knox), encryption and data masking
• Experience with automation/configuration management using Chef, Ansible or an equivalent
• Strong experience with any Linux distribution
• Basic understanding of network technologies, CPU, memory and storage
• Database administration is a plus

Qualifications and education requirements
• 2 to 4 years of experience with, and detailed knowledge of, core Hadoop components, solutions and dashboards running on big data technologies such as Hadoop/Spark
• Bachelor's degree or equivalent in Computer Science, Information Technology or related fields

Job posted by Prashanta Singh

Backend SDE3/Lead Engineer

Founded 2005
Via Zyoin
Location: Bengaluru (Bangalore)
Experience: 6 - 10 years
Salary: ₹15,00,000 - ₹60,00,000

General accountabilities/job responsibilities
• Participate in the requirements analysis, design, development and testing of applications
• The candidate is expected to write code themselves: high-level code, code reviews, unit testing and deployment
• Apply design principles practically, with a focus on user experience, usability, template designs, cross-browser issues and client-server concepts
• Contribute to the development of project estimates, scheduling and deliverables
• Work closely with the QA team to determine testing requirements, ensuring full coverage and the best product quality
• There is also the opportunity to mentor and guide junior team members in excelling at their jobs

Job specifications
• BE/B.Tech. in Computer Science, or an MCA, from a reputed university
• 6+ years of experience in software development, with an emphasis on Java/J2EE server-side programming
• Hands-on experience with Core Java, multithreading, RMI, socket programming, JDBC, NIO, web services and design patterns
• Knowledge of distributed systems, distributed caching, messaging frameworks, ESB, etc.
• Knowledge of the Linux operating system and PostgreSQL/MySQL/MongoDB/Cassandra databases is essential
• Knowledge of HBase, Hadoop and Hive is desirable
• Familiarity with message queue systems, AMQP and Kafka is desirable
• Experience as a participant in Agile methodologies
• Excellent written and verbal communication skills and presentation skills
• Note: this is not a full-stack requirement; we are looking purely for backend engineers

Job posted by Suchoritha Zyoin

Bigdata Engineer

Founded 2014
Location: Bengaluru (Bangalore)
Experience: 4 - 7 years
Salary: ₹5,00,000 - ₹9,00,000

Roles and responsibilities:
• Develop and maintain applications using Enterprise Java and distributed technologies
• Experience with Hadoop, Kafka, Spark, Elasticsearch, SQL, Kibana and Python; experience with machine learning and analytics is a plus
• Collaborate with developers, product managers, business analysts and business users in conceptualizing, estimating and developing new software applications and enhancements
• Collaborate with the QA team to define test cases and metrics, and to resolve questions about test results
• Assist in the design and implementation process for new products; research and create POCs for possible solutions
• Develop components based on business and/or application requirements
• Create unit tests in accordance with team policies and procedures
• Advise and mentor team members in specialized technical areas, and fulfil administrative duties as defined by the support process
• Work with cross-functional teams during crises to address and resolve complex incidents and problems, in addition to assessment, analysis and resolution of cross-functional issues

Job posted by Silita S

Senior Architect

Founded 2001
Location: Remote, Bengaluru (Bangalore)
Experience: 15 - 20 years
Salary: ₹50,00,000 - ₹1,20,00,000

About the company, Conviva:
Conviva is the leader in streaming media intelligence, powered by its real-time platform. More than 250 industry leaders and brands, including CBS, CCTV, Cirque du Soleil, DAZN, Disney+, HBO, Hulu, Sky, Sling TV, TED, Univision and WarnerMedia, rely on Conviva to maximize consumer engagement, deliver the quality experiences viewers expect, and drive revenue growth. With a global footprint of more than 500 million unique viewers watching 150 billion streams per year across 3 billion applications streaming on devices, Conviva offers streaming providers unmatched scale for continuous video measurement, intelligence and benchmarking across every stream, every screen, every second. Conviva is privately held and headquartered in Silicon Valley, California, with offices around the world. For more information, please visit us at www.conviva.com.

What you get to do:
• Be a thought leader. As one of the most senior technical minds in the India centre, influence our technical evolution by pushing the boundaries of what is possible, testing forward-looking ideas and demonstrating their value.
• Be a technical leader. Demonstrate pragmatic skill in translating requirements into technical designs.
• Be an influencer. Understand challenges and collaborate across executives and stakeholders in a geographically distributed environment to influence them.
• Be a technical mentor. Build respect within the team, mentor senior engineers technically, and contribute to the growth of talent in the India centre.
• Be a customer advocate. Be empathetic to the customer and domain, resolving ambiguity efficiently with the customer in mind.
• Be a transformation agent. Passionately champion engineering best practices and share them across teams.
• Be hands-on. Participate regularly in code and design reviews, drive technical prototypes and actively contribute to resolving difficult production issues.

What you bring to the role:
• You thrive in a start-up environment and have a platform mindset.
• Excellent communicator, with a demonstrated ability to succinctly describe complex technical designs and technology choices to both executives and developers.
• Expert in Scala coding; broader JVM-stack experience is a bonus.
• Expert in big data technologies like Druid, Spark, Hadoop, Flink (or Akka) and Kafka.
• Passionate about one or more engineering best practices that influence design, code quality or developer efficiency.
• Familiar with building distributed applications using web services and RESTful APIs.
• Familiar with building SaaS platforms, whether on in-house data centres or on public cloud providers.

Job posted by Bevin Baby

Sr. SDET (Data engineering)

Founded 2001
Location: Bengaluru (Bangalore)
Experience: 5 - 9 years
Salary: Best in industry

About Blackhawk Network:
Blackhawk Network is building a digital platform and products that bring people and brands together. We facilitate cross-channel payments via cash-in, cash-out and mobile payments. By leveraging blockchain, smart contracts, serverless technology and real-time payment systems, we are unlocking the next million users through innovation.

Our employees are our biggest assets! Come find out how we engage with the biggest brands in the world. We look for people who collaborate, who are inspirational, and who have passion that can make a difference by working as a team while striving for global excellence. You can expect a strong investment in your professional growth and a dedication to crafting a successful, sustainable career for you. Our teams are composed of highly talented and passionate 'A' players who are also invested in mentoring and enabling the best qualities. Our vibrant culture and high expectations will kindle your passion and bring out the best in you!

As a leader in branded payments, we are building a strong, diverse team and expanding in Asia Pacific: we are hiring in Bengaluru, India! This is an amazing opportunity for problem solvers who want to be part of an innovative and creative engineering team that values your contribution to the company. If this role has your name written all over it, apply now with a resume so that we can explore further and get connected. If you enjoy building world-class payment applications, are highly passionate about pushing the boundaries of scale and availability on the cloud, leveraging next-horizon technologies, rapidly delivering features to production, making data-driven decisions on product development, and collaborating and innovating with like-minded experts, then this would be your ideal job. Blackhawk is seeking passionate backend engineers at all levels to build our next generation of payment systems on public cloud infrastructure. Our team enjoys working together to contribute to meaningful work seen by millions of merchants worldwide.

As a Senior SDET, you will work closely with data engineers to automate testing of developed features and manually test new data ETL jobs, data pipelines and reports. You will own the complete architecture of the automation framework, and plan and design automation for data ingestion, transformation and reporting/visualization. You will build high-quality automation frameworks covering end-to-end testing of the data platforms, ensure test data setup, and pre-empt post-production issues through high-quality testing in the lower environments. You will get the opportunity to contribute at all levels of the test pyramid. You will also work with customer success and product teams to replicate post-release production issues.

Key qualifications:
• Bachelor's degree in Computer Science, Engineering or related fields
• 5+ years of experience testing data ingestion, visualization and info-delivery systems
• Real passion for data quality, reconciliation, and uncovering hard-to-find scenarios and bugs
• Proficiency in at least one programming language (preferably Python or Java)
• Expertise in end-to-end testing and data validation of ETL platforms (e.g. DataStage, Matillion) and BI platforms (e.g. MicroStrategy, Power BI)
• Experience working with big data technologies such as Hadoop and MapReduce is desirable
• Excellent analytical, problem-solving and communication skills; self-motivated, results-oriented and deadline-driven
• Experience with databases and with data visualization and dashboarding tools is desirable
• Experience working with Amazon Web Services (AWS) and Redshift is desirable
• Excellent knowledge of the software development lifecycle, testing methodologies, QA terminology, processes and tools
• Experience with automation frameworks and tools such as TestNG, JUnit and Selenium
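The data reconciliation this SDET role describes boils down to comparing a source extract against the loaded target. Below is a minimal stdlib-only sketch of that idea; the check names and toy data are assumptions for illustration, not Blackhawk's actual framework.

```python
# Hypothetical post-load reconciliation: compare row counts, a value sum
# and the key set between a "source" extract and the "target" table.
source = [("A", 10.0), ("B", 20.5), ("C", 5.5)]
target = [("A", 10.0), ("B", 20.5), ("C", 5.5)]

def reconcile(source, target):
    """Return a dict of simple data-quality checks for an ETL load."""
    return {
        "row_count_match": len(source) == len(target),
        "sum_match": abs(sum(v for _, v in source)
                         - sum(v for _, v in target)) < 1e-9,
        "key_match": {k for k, _ in source} == {k for k, _ in target},
    }

checks = reconcile(source, target)
assert all(checks.values()), f"reconciliation failed: {checks}"
```

In a real automation framework the same checks would run as test assertions against the warehouse after each pipeline run, with the tolerance and key columns configured per table.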

Job posted by Sandeep Madhavan

Data Engineer

Founded 2006
Location: Bengaluru (Bangalore)
Experience: 4 - 9 years
Salary: ₹15,00,000 - ₹30,00,000

Requirements:
• Previous experience working in large-scale data engineering
• 4+ years of experience in data engineering and/or backend technologies; cloud experience (any) is mandatory
• Previous experience architecting and designing backends for large-scale data processing
• Familiarity and experience with the different technologies related to data engineering: different database technologies, Hadoop, Spark, Storm, Hive, etc.
• Hands-on, with the ability to contribute a key portion of the data engineering backend
• Self-inspired and motivated to drive for exceptional results
• Familiarity and experience with the different stages of data engineering: data acquisition, data refining, large-scale data processing, and efficient data storage for business analysis
• Familiarity and experience with different DB technologies and how to scale them

Responsibilities:
• End-to-end responsibility for the data engineering architecture and design, and then its development and implementation
• Build data engineering workflows for large-scale data processing
• Discover opportunities in data acquisition
• Bring industry best practices to the data engineering workflow
• Develop dataset processes for data modelling, mining and production
• Take on additional technical responsibilities to drive initiatives to completion
• Recommend ways to improve data reliability, efficiency and quality
• Go out of your way to reduce complexity
• Humble and outgoing: engineering cheerleaders

Job posted by Meenu Singh

Spark Scala Developer

Founded 2009
Location: Bengaluru (Bangalore)
Experience: 4 - 7 years
Salary: ₹5,00,000 - ₹15,00,000

More than 2 years of Spark/Scala experience is required. A combination of Java and Scala is fine, as is a big data developer with strong Core Java concepts.
• Scala/Spark developer with strong proficiency in Scala on Spark (Hadoop); Scala + Java is also preferred
• Familiarity with the complete SDLC process and Agile methodology (Scrum)
• Version control with Git

Job posted by Kripa Oza

Data Engineer

Founded 2017
Location: Bengaluru (Bangalore)
Experience: 3 - 10 years
Salary: up to ₹20,00,000

A data engineer with AWS cloud infrastructure experience to join our Big Data Operations team. This role will provide advanced operations support, contribute to automation and system improvements, and work directly with enterprise customers to provide excellent customer service.

The candidate:
1. Must have very good hands-on technical experience of 3+ years with Java or Python
2. Working experience and good understanding of AWS Cloud; advanced experience with IAM policy and role management
3. Infrastructure operations: 5+ years supporting systems infrastructure operations, upgrades, deployments using Terraform, and monitoring
4. Hadoop: experience with Hadoop (Hive, Spark, Sqoop) and/or AWS EMR
5. Knowledge of PostgreSQL/MySQL/DynamoDB backend operations
6. DevOps: experience with DevOps automation - orchestration/configuration management and CI/CD tools (Jenkins)
7. Version control: working experience with one or more version control platforms like GitHub or GitLab
8. Knowledge of AWS QuickSight reporting
9. Monitoring: hands-on experience with monitoring tools such as AWS CloudWatch, AWS CloudTrail, Datadog and Elasticsearch
10. Networking: working knowledge of TCP/IP networking, SMTP, HTTP, load balancers (ELB) and high-availability architecture
11. Security: experience implementing role-based security, including AD integration, security policies, and auditing in a Linux/Hadoop/AWS environment; familiar with penetration testing and scan tools for remediation of security vulnerabilities
12. Demonstrated success in learning new technologies quickly

Roles and responsibilities:
1. Create procedures/runbooks for operational and security aspects of the AWS platform
2. Improve AWS infrastructure by developing and enhancing automation methods
3. Provide advanced business and engineering support services to end users
4. Lead other admins and platform engineers through design and implementation decisions, balancing strategic design against tactical needs
5. Research and deploy new tools and frameworks to build a sustainable big data platform
6. Assist with creating training and onboarding programs for new end users
7. Lead Agile/Kanban workflows and team process work
8. Troubleshoot issues to resolve problems
9. Provide status updates to the Operations product owner and stakeholders
10. Track all details in the issue tracking system (JIRA)
11. Review issues and triage problems for new service/support requests
12. Use DevOps automation tools, including Jenkins build jobs
13. Fulfil ad-hoc data or report requests from different functional groups

Job posted by Ameem Iqubal

Sr. Data Engineer (SQL/Spark)

Founded 2015
Location: Bengaluru (Bangalore)
Experience: 4 - 9 years
Salary: ₹8,00,000 - ₹18,00,000

Key result areas
• Create and maintain optimal data pipelines
• Assemble large, complex data sets that meet functional and non-functional business requirements
• Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
• Keep our data separated and secure
• Create data tools for the analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader
• Build analytics tools that utilize the data pipeline to provide actionable insights into key business performance metrics
• Work with stakeholders, including the executive, product, data and design teams, to assist with data-related technical issues and support their data infrastructure needs
• Work with data and analytics experts to strive for greater functionality in our data systems

Knowledge, skills and experience
We are looking for a candidate with 5+ years of experience in a Data Engineer role, with experience using the following software/tools:
• Experience developing big data applications using Spark, Hive, Sqoop, Kafka and MapReduce
• Experience with stream-processing systems: Spark Streaming, Storm, etc.
• Experience with object-oriented/object-functional scripting languages: Python, Scala, etc.
• Experience designing and building dimensional data models to improve the accessibility, efficiency and quality of data
• Proficiency in writing advanced SQL and expertise in SQL performance tuning; experience with data science and machine learning tools and technologies is a plus
• Experience with relational SQL and NoSQL databases, including Postgres and Cassandra
• Experience with Azure cloud services is a plus
• Financial services knowledge is a plus
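The dimensional modelling and advanced-SQL work listed above can be illustrated with a tiny star-schema query. This stdlib-only `sqlite3` sketch uses an invented fact/dimension pair; the schema, table names and data are assumptions for illustration, not the employer's.

```python
import sqlite3

# A toy dimensional model: one fact table joined to one dimension and
# aggregated, the shape of query a dimensional data model is built for.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
    CREATE TABLE fact_sales (product_id INTEGER, amount REAL);
    INSERT INTO dim_product VALUES (1, 'books'), (2, 'games');
    INSERT INTO fact_sales VALUES (1, 100.0), (1, 50.0), (2, 75.0);
""")

# Aggregate fact rows by a dimension attribute.
rows = con.execute("""
    SELECT d.category, SUM(f.amount) AS total
    FROM fact_sales f
    JOIN dim_product d USING (product_id)
    GROUP BY d.category
    ORDER BY total DESC
""").fetchall()
print(rows)  # [('books', 150.0), ('games', 75.0)]
```

Keeping descriptive attributes in the dimension and measures in the fact table is what keeps queries like this simple and fast as the fact table grows.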

Job posted by Jayanti M