
Hadoop Jobs in Bangalore (Bengaluru)

Explore top Hadoop job opportunities in Bangalore (Bengaluru) at top companies and startups. All jobs are posted by verified employees, who can be contacted directly below.

Delivery Manager

Location: Bengaluru (Bangalore)
Experience: 16 - 23 years
Salary: Best in industry

Experience: 18-21 years total.
- 6-12 years of experience as a BA/Product Owner managing software products, with a minimum of 2 years of industry-relevant experience as a Product Owner.
- At least 3 years managing products in the industrial domain, driving product development through market, domain, and feature enrichment.
- Able to manage and prioritize feature iterations, making trade-offs that affect schedule, cost, quality, and customer expectations.
- 3+ years of in-depth knowledge of Agile processes and principles.
- Exposure to managing a platform product backlog.
- Has played a leadership role in Scrum teams.
- Able to manage and set release expectations for the delivery of new functionality.
- Able to write clear user epics and stories with user acceptance criteria in an Agile environment.
- Technical management or a hands-on background in areas such as AWS/Azure ML, AWS/Azure data ingestion and preparation services, Tableau/Power BI, SQL/NoSQL, row and columnar databases, Hadoop, Cassandra, Spark, ELK, and related data and machine learning services is a great add-on.
- Exposure to connected devices and embedded software, with an understanding of connectivity and security, is a must.
- Sharp analytical and problem-solving skills; attention to detail.
- Experience using SDLC (software development lifecycle) practices and frameworks and backlog management tools (e.g. JIRA/Confluence).
- Good skills and knowledge of facilitation, situational awareness, conflict resolution, continual improvement, empowerment, and increasing transparency.
- Ability to work successfully in a team environment.
- Clear written and verbal communication skills; self-motivated, creative, and flexible; excellent time management and organizational skills.

Job posted by Mastanvali Shaik

Data Engineer (Consultant/Senior Consultant)

Founded 1993
Location: Pune, Bengaluru (Bangalore), Coimbatore, NCR (Delhi | Gurgaon | Noida)
Experience: 3 - 10 years
Salary: ₹18L - ₹40L

Data Engineers develop modern data architecture approaches to meet key business objectives and provide end-to-end data solutions. You might spend a few weeks with a new client on a deep technical review or a complete organizational review, helping them understand the potential that data brings to solving their most pressing problems. On other projects you might act as the architect, leading the design of technical solutions, or perhaps overseeing a program inception to build a new product. It could also be a software delivery project where you're equally happy coding and tech-leading the team to implement the solution.

You'll spend time on the following:
- Partner with teammates to create complex data processing pipelines that solve our clients' most ambitious challenges.
- Collaborate with data scientists to design scalable implementations of their models.
- Pair to write clean, iterative code based on TDD (see the sketch after this listing).
- Leverage continuous delivery practices to deploy data pipelines.
- Advise and educate clients on the different distributed storage and computing technologies from the plethora of options available.
- Develop modern data architecture approaches to meet key business objectives and provide end-to-end data solutions.
- Create data models and speak to the trade-offs of different modeling approaches.

Here's what we're looking for:
- A good understanding of data modelling and experience with data engineering tools and platforms such as Kafka, Spark, and Hadoop.
- Experience building large-scale data pipelines and data-centric applications in a production setting, using distributed storage platforms such as HDFS, S3, or NoSQL databases (HBase, Cassandra, etc.) and distributed processing platforms such as Hadoop, Spark, Hive, Oozie, and Airflow.
- Hands-on experience with MapR, Cloudera, Hortonworks, and/or cloud-based Hadoop distributions (AWS EMR, Azure HDInsight, Qubole, etc.).
- Comfort taking data-driven approaches and applying data security strategy to solve business problems.
- Working with data excites you: you can build and operate data pipelines and maintain data storage, all within distributed systems.
- Strong communication and client-facing skills, with the ability to work in a consulting environment.
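A minimal sketch of the TDD-style pipeline code this role describes, assuming PySpark; the function, columns, and test values are illustrative, not from the posting.

# pipeline.py: a small, testable PySpark transformation
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F

def daily_revenue(orders: DataFrame) -> DataFrame:
    """Sum order amounts per day, treating missing amounts as zero."""
    return (
        orders
        .withColumn("amount", F.coalesce(F.col("amount"), F.lit(0.0)))
        .groupBy("order_date")
        .agg(F.sum("amount").alias("revenue"))
    )

# test_pipeline.py: in a TDD flow this test is written first
def test_daily_revenue():
    spark = SparkSession.builder.master("local[1]").getOrCreate()
    orders = spark.createDataFrame(
        [("2021-01-01", 10.0), ("2021-01-01", None), ("2021-01-02", 5.0)],
        ["order_date", "amount"],
    )
    result = {r["order_date"]: r["revenue"] for r in daily_revenue(orders).collect()}
    assert result == {"2021-01-01": 10.0, "2021-01-02": 5.0}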

Job posted by sabarinath konath

Data Engineer

Founded 2015
Location: Bengaluru (Bangalore)
Experience: 3 - 5 years
Salary: ₹6L - ₹12L

Data Engineer:
- Drive the data engineering implementation.
- Strong experience in building data pipelines.
- AWS stack experience is a must.
- Deliver conceptual, logical, and physical data models for the implementation teams.
- A stronghold in SQL is a must: advanced working knowledge of SQL, experience with a variety of relational databases, and SQL query authoring.
- AWS cloud data pipeline experience is a must: data pipelines and data-centric applications using distributed storage platforms like S3 and distributed processing platforms like Spark, Airflow, and Kafka (a scheduling sketch follows this listing).
- Working knowledge of AWS technologies such as S3, EC2, EMR, RDS, Lambda, and Elasticsearch.
- Ability to use a major programming language (e.g. Python/Java) to process data for modelling.
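A minimal sketch of how such an S3-to-Spark pipeline might be scheduled, assuming Airflow 2.x; the DAG name, bucket paths, and the spark-submit command are hypothetical.

from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="orders_daily_etl",           # hypothetical pipeline name
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Stage raw files from the landing bucket (paths are illustrative)
    stage = BashOperator(
        task_id="stage_raw_files",
        bash_command="aws s3 sync s3://example-landing/orders/ s3://example-raw/orders/",
    )
    # Run the Spark transformation over the staged data
    transform = BashOperator(
        task_id="transform_orders",
        bash_command="spark-submit jobs/transform_orders.py",
    )
    stage >> transform   # transform runs only after staging succeeds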

Job posted by geeti gaurav mohanty

Data Lake/ Hadoop Developer

Founded 2018
Location: Bengaluru (Bangalore)
Experience: 3 - 5 years
Salary: ₹6L - ₹10L

Job Title: Data Lake/Hadoop Developer
Work Location: Bangalore or any location
Experience: 3 to 5 years
Package: as per market standard
Notice Period: immediate joiners
It's a full-time opportunity with our client.

Mandatory Skills: Hadoop, Spark, data lake, Hive, SQL, UNIX & Agile

Job Description:
- 3-5 years of experience with the Hadoop ecosystem, Spark, and data lakes.
- Candidate must be from a development background.
- Proficient in writing SQL and HiveQL code and in data modelling.
- Hands-on experience with UNIX and basic shell scripting.
- Knowledge of code migration and version control tools like Jenkins, Bitbucket, GitHub, etc.
- Experience developing data ingestion and integration flows using Big Data ecosystem tools (Hadoop, Hive, Sqoop, Spark, Presto, and Hadoop RESTful APIs); a small ingestion sketch follows this listing.
- Must have 1-2 years of experience in core Java or Scala.
- Hands-on experience with Spark data flows using Java/Scala.
- Conversance with the Agile methodology of project execution is preferred.
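A minimal sketch of a Hive-backed ingestion flow of the kind this posting describes, assuming PySpark with a configured Hive metastore; the paths, database, and column names are illustrative.

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("events_ingest")
    .enableHiveSupport()     # assumes a Hive metastore is configured
    .getOrCreate()
)

# Read raw files landed in the data lake (path is illustrative)
events = spark.read.json("/datalake/raw/events/2021-01-01/")

# Append into a partitioned Hive table so downstream HiveQL can query it
(
    events.write
    .mode("append")
    .partitionBy("event_date")
    .format("parquet")
    .saveAsTable("analytics.events")
)

# Downstream consumers can then use plain HiveQL, e.g.:
#   SELECT event_date, COUNT(*) FROM analytics.events GROUP BY event_date;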

Job posted by Venkat B

Data Engineer

Founded 2015
via Qrata
Location: Bengaluru (Bangalore)
Experience: 4 - 9 years
Salary: ₹28L - ₹56L

- 5+ years of experience in a Data Engineer role.
- Proficiency in Linux.
- SQL knowledge and experience working with relational databases and SQL query authoring, as well as familiarity with databases including MySQL, MongoDB, Cassandra, and Athena.
- Experience with Python/Scala.
- Experience with Big Data technologies like Apache Spark.
- Experience with Apache Airflow.
- Experience with data pipeline and ETL tools like AWS Glue.
- Experience working with AWS cloud services: EC2, S3, RDS, and Redshift.

Job posted by Blessy Fernandes

Senior Development Engineer- Platform - Data Science

Founded 2019
Location: Bengaluru (Bangalore), Mumbai
Experience: 3 - 6 years
Salary: Best in industry

At Jupiter, we are building a new-age, personalized, mobile-first, AI-based digital bank. We are a fast-paced tech startup that relentlessly innovates each day to make banking more accessible and transparent for our users. To help us grow, we are looking for awesome team players to join us and contribute towards building a community-first digital bank. Join us if you wish to build the bank of the future!

About the Platform Data Science Team:
The Platform Data Science team works at the intersection of data science and engineering. Domain experts develop and advance platforms, including the data platform, machine learning platform, and other platforms for forecasting, experimentation, anomaly detection, conversational AI, underwriting of risk, portfolio management, fraud detection and prevention, and many more. We are also the data science and analytics partners for Product, and we provide behavioural science insights across Jupiter.

About the role:
We're looking for strong software engineers who can combine EMR, Redshift, Hadoop, Spark, Kafka, Elasticsearch, TensorFlow, PyTorch, and other technologies to build the next-generation data platform, ML platform, and experimentation platform. If this sounds interesting, we'd love to hear from you! This role involves designing and developing software products that impact many areas of our business. The individual in this role will help define requirements, create software designs, implement code to these specifications, provide thorough unit and integration testing, and support products while they are deployed and used by our stakeholders.

What you'll need:
- Programming experience with at least one modern language such as Java or Scala, including object-oriented design.
- 1+ years of experience contributing to the architecture and design (architecture, design patterns, reliability, and scaling) of new and current systems.
- A bachelor's degree in Computer Science or a related field.
- 3+ years of professional experience in software development.
- Computer science fundamentals in object-oriented design, data structures, algorithm design, problem solving, and complexity analysis.
- Experience in databases, analytics, big data systems, or business intelligence products: data lake, data warehouse, ETL, ML platform.
- Big data tech such as Hadoop and Apache Spark.

Job posted by CHETNA KHOLIA

Data Engineer

Founded 2014
Location: Bengaluru (Bangalore)
Experience: 5 - 10 years
Salary: ₹6L - ₹15L

Roles & Responsibilities:
- Proven experience deploying and tuning open-source components into enterprise-ready production tooling.
- Experience with datacentre (Metal as a Service, MAAS) and cloud deployment technologies (AWS or GCP architect certificates required).
- Deep understanding of Linux, from kernel mechanisms through user-space management.
- Experience with CI/CD (continuous integration and deployment) solutions (Jenkins).
- Use of monitoring tools (local and on public cloud platforms) such as Nagios, Prometheus, Sensu, ELK, CloudWatch, Splunk, and New Relic to trigger instant alerts, reports, and dashboards.
- Work closely with the development and infrastructure teams to analyze and design solutions with four-nines (99.99%) uptime across globally distributed, clustered, production and non-production virtualized infrastructure.
- Wide understanding of IP networking as well as data centre infrastructure.

Skills:
- Expert with software development tools and source code management: understanding and managing issues and code changes and grouping them into deployment releases in a stable and measurable way to maximize production.
- Must be expert at developing and using Ansible roles and configuring deployment templates with Jinja2.
- Solid understanding of data collection tools like Flume, Filebeat, Metricbeat, and JMX Exporter agents.
- Extensive experience operating and tuning the Kafka streaming data platform, specifically as a message queue for big data processing (a producer/consumer sketch follows this listing).
- Strong understanding of, and hands-on experience with, the Apache Spark framework (specifically Spark Core and Spark Streaming), orchestration platforms (Mesos and Kubernetes), data storage platforms (Elastic Stack, Carbon, ClickHouse, Cassandra, Ceph, HDFS), and core presentation technologies (Kibana and Grafana).
- Excellent scripting and programming skills (bash, Python, Java, Go, Rust). Must have previous experience with Rust in order to support and improve in-house developed products.

Certification:
- Red Hat Certified Architect certificate or equivalent required.
- CCNA certificate required.
- 3-5 years of experience running open-source big data platforms.
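A minimal sketch of Kafka used as a message queue for big data processing, as the posting describes, assuming the kafka-python client; the broker address, topic, and group name are illustrative.

import json
from kafka import KafkaConsumer, KafkaProducer

# Producer: enqueue events for downstream processing
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",                  # illustrative address
    value_serializer=lambda v: json.dumps(v).encode(),
    acks="all",        # wait for full acknowledgement: durability over latency
    linger_ms=50,      # small batching window to improve throughput
)
producer.send("events", {"user": 42, "action": "login"})
producer.flush()

# Consumer: joining a consumer group balances partitions across workers
consumer = KafkaConsumer(
    "events",
    bootstrap_servers="localhost:9092",
    group_id="etl-workers",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode()),
)
for message in consumer:
    print(message.partition, message.offset, message.value)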

Job posted by Merlin Metilda

Cloud Data Engineer

Founded 1982
Location: Bengaluru (Bangalore)
Experience: 4 - 6 years
Salary: ₹8L - ₹20L

Mandatory:
- Experience with cloud technologies, particularly AWS.
- Experience handling large dataset migrations (AWS DMS preferred), both one-time load and incremental load (a sketch follows this listing).
- Good SQL skills and hands-on Python.

Added advantage:
- AWS DMS, Athena, EMR, Kinesis Streams, Lambda, AWS Redshift, Apache Spark, EventBridge, Glue, Snowflake, Kafka; experience with microservice-based architectures is strongly desired.
- 5+ years of experience in cloud (AWS preferred) and big data technologies.
- Excellent organizational and communication skills.
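A minimal sketch of driving the one-time ("full-load") and incremental ("cdc") migration modes the posting mentions, assuming boto3 and an already-created DMS replication task; the region and task ARN are placeholders.

import boto3

dms = boto3.client("dms", region_name="ap-south-1")   # region is illustrative

TASK_ARN = "arn:aws:dms:ap-south-1:123456789012:task:EXAMPLE"  # placeholder

# A DMS replication task is created with a migration type of "full-load"
# (one-time load), "cdc" (incremental load), or "full-load-and-cdc".
# Starting it is the same API call regardless of type:
dms.start_replication_task(
    ReplicationTaskArn=TASK_ARN,
    StartReplicationTaskType="start-replication",
)

# Check the task status (e.g. to poll until the full load finishes)
status = dms.describe_replication_tasks(
    Filters=[{"Name": "replication-task-arn", "Values": [TASK_ARN]}]
)["ReplicationTasks"][0]["Status"]
print(status)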

Job posted by Shweta Raichur

Data Architect

Founded 2018
Location: Bengaluru (Bangalore)
Experience: 10 - 15 years
Salary: ₹15L - ₹20L

Hypersonix.ai is disrupting the business intelligence and analytics space with AI, ML, and NLP capabilities that drive specific business insights through a conversational user experience. Hypersonix.ai has been built from the ground up with new-age technology to simplify the consumption of data for our customers in restaurants, hospitality, and other industry verticals. Hypersonix.ai is seeking a Data Evangelist who can work closely with customers to understand their data sources, acquire data, and drive product success by delivering insights based on customer needs.

Primary Responsibilities:
- Lead and deliver the complete application lifecycle (design, development, deployment, and support) for actionable BI and advanced analytics solutions.
- Design and develop data models and ETL processes for structured and unstructured data distributed across multiple cloud platforms.
- Develop and deliver solutions with data streaming capabilities for large volumes of data (a streaming sketch follows this listing).
- Design, code, and maintain parts of the product and drive customer adoption.
- Build a data acquisition strategy to onboard customer data with speed and accuracy.
- Work both independently and with team members to develop, refine, implement, and scale ETL processes.
- Provide ongoing support and maintenance for live clients' data and analytics needs.
- Define the data automation architecture to drive self-service data load capabilities.

Required Qualifications:
- Bachelor's/Master's/Ph.D. in Computer Science, Information Systems, Data Science, Artificial Intelligence, Machine Learning, or a related discipline.
- 10+ years of experience guiding the development and implementation of data architecture in structured, unstructured, and semi-structured data environments.
- Highly proficient in big data, data architecture, data modeling, data warehousing, data wrangling, data integration, data testing, and application performance tuning.
- Experience with data engineering tools and platforms such as Kafka, Spark, Databricks, Flink, Storm, Druid, and Hadoop.
- Strong hands-on programming and scripting for the big data ecosystem (Python, Scala, Spark, etc.).
- Experience building batch and streaming ETL data pipelines using workflow management tools like Airflow, Luigi, NiFi, Talend, etc.
- Familiarity with cloud-based platforms like AWS, Azure, or GCP.
- Experience with cloud data warehouses like Redshift and Snowflake.
- Proficient in writing complex SQL queries.
- Excellent communication skills and prior experience working closely with customers.
- Data-savvy: loves to understand large data trends and is passionate about data analysis.
- Desire to learn about, explore, and invent new tools for solving real-world problems using data.

Desired Qualifications:
- Cloud computing experience, Amazon Web Services (AWS).
- Prior experience with data warehousing concepts and multi-dimensional data models.
- Full command of analytics concepts, including dimensions, KPIs, reports, and dashboards.
- Prior experience managing client implementations of analytics projects.
- Knowledge and prior experience of machine learning tools.
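A minimal sketch of the data-streaming side of such a solution, assuming Spark Structured Streaming reading from Kafka (which needs the spark-sql-kafka package on the classpath); the topic, broker address, and checkpoint path are illustrative.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("stream_ingest").getOrCreate()

# Read a continuous stream of events from Kafka (address is illustrative)
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "orders")
    .load()
)

# Kafka delivers raw bytes; cast the payload and count events per minute
counts = (
    events
    .select(F.col("value").cast("string").alias("payload"), F.col("timestamp"))
    .groupBy(F.window("timestamp", "1 minute"))
    .count()
)

# Write results out continuously; the checkpoint makes the query restartable
query = (
    counts.writeStream
    .outputMode("complete")
    .format("console")
    .option("checkpointLocation", "/tmp/checkpoints/stream_ingest")
    .start()
)
query.awaitTermination()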

Job posted by Gowshini Maheswaran

Software Engineer

Founded 2015
via Dremio
Location: Remote, Bengaluru (Bangalore), Hyderabad
Experience: 3 - 10 years
Salary: ₹15L - ₹65L

Be Part of Building the Future
Dremio is the Data Lake Engine company. Our mission is to reshape the world of analytics and deliver on the promise of data with a fundamentally new architecture, purpose-built for the exploding trend towards cloud data lake storage such as AWS S3 and Microsoft ADLS. We dramatically reduce, and even eliminate, the need for the complex and expensive workarounds that have been in use for decades, such as data warehouses (whether on-premise or cloud-native), structural data prep, ETL, cubes, and extracts. We do this by enabling lightning-fast queries directly against data lake storage, combined with full self-service for data users and full governance and control for IT. The results for enterprises are extremely compelling: 100x faster time to insight, 10x greater efficiency, zero data copies, and game-changing simplicity. Equally compelling is the market opportunity for Dremio, as we are well on our way to disrupting a $25BN+ market.

About the Role
The Dremio India team owns the Data Lake Engine along with the cloud infrastructure and services that power it. With a focus on next-generation data analytics supporting modern table formats like Iceberg and Delta Lake, open-source initiatives such as Apache Arrow and Project Nessie, and hybrid-cloud infrastructure, this team provides many opportunities to learn, deliver, and grow in your career. We are looking for innovative minds with experience in leading and building high-quality distributed systems at massive scale and solving complex problems.

Responsibilities & Ownership:
- Lead, build, and deliver next-generation features related to the scalability, reliability, robustness, usability, security, and performance of the product, and ensure customers succeed with them.
- Work on distributed systems for data processing: efficient protocols and communication, locking and consensus, schedulers, resource management, low-latency access to distributed storage, auto-scaling, and self-healing.
- Understand and reason about concurrency and parallelization to deliver scalability and performance in a multithreaded and distributed environment.
- Lead the team in solving complex and unknown problems.
- Solve technical problems and customer issues with technical expertise.
- Design and deliver architectures that run optimally on public clouds like GCP, AWS, and Azure.
- Mentor other team members for high-quality work and design.
- Collaborate with Product Management to deliver on customer requirements and innovation.
- Collaborate with Support and field teams to ensure that customers are successful with Dremio.

Requirements:
- B.S./M.S./equivalent in Computer Science or a related technical field, or equivalent experience.
- Fluency in Java/C++ with 8+ years of experience developing production-level software.
- Strong foundation in data structures, algorithms, and multi-threaded and asynchronous programming models, and their use in developing distributed and scalable systems.
- 5+ years of experience developing complex and scalable distributed systems and delivering, deploying, and managing microservices successfully.
- Hands-on experience in query processing or optimization, distributed systems, concurrency control, data replication, code generation, networking, and storage systems.
- Passion for quality, zero-downtime upgrades, availability, resiliency, and uptime of the platform.
- Passion for learning and delivering using the latest technologies.
- Ability to solve ambiguous, unexplored, and cross-team problems effectively.
- Hands-on experience working on projects on AWS, Azure, and Google Cloud Platform.
- Experience with containers and Kubernetes for orchestration and container management in private and public clouds (AWS, Azure, and Google Cloud).
- Understanding of distributed file systems such as S3, ADLS, or HDFS.
- Excellent communication skills and an affinity for collaboration and teamwork.
- Ability to work individually and collaboratively with other team members.
- Ability to scope and plan solutions for big problems and mentor others on the same.
- Interested and motivated to be part of a fast-moving startup with a fun and accomplished team.

Job posted by Maharaja Subramanian (CW)

Big Data Engineer

Founded 2016
Location: Bengaluru (Bangalore)
Experience: 2 - 4 years
Salary: ₹3L - ₹4L

You will work on:
We help many of our clients make sense of their large investments in data, whether that means building analytics solutions or machine learning applications. You will work on cutting-edge, cloud-native technologies to crunch terabytes of data into meaningful insights.

What you will do (responsibilities):
- Collaborate with data scientists, engineers, and product management to transform raw data into actionable, meaningful insights for the enterprise.
- Work in a small, dynamic, product-oriented environment to deliver enterprise-class products.
- Continuously improve software development practices and work across the full stack.

What you bring (skills):
- Experience building modern, cloud-native, microservices-based applications using state-of-the-art frameworks in Java/Kotlin, such as Spring Boot, Hibernate, and other JVM-based frameworks.
- Experience developing elastic, cloud-based big data applications with modern technologies like Apache Spark, BigQuery, Airflow, Beam, Kafka, and Elasticsearch.
- Ability to produce easily consumable RESTful APIs with strong living documentation and specification-by-example tests.

Great if you know (skills):
- T-shaped skills are always preferred, so the passion to work across the full stack spectrum is more than welcome.
- Exposure to infrastructure skills like Docker, Istio, and Kubernetes is a plus.
- Ability to work out the right deployment strategies for big data systems.
- Collaborate with DevOps and test automation teams to build a favorable developer experience in both build and CI/CD.
- Attitude to look at an application holistically and debug any component if required, whether it is based on a UI technology like JavaScript or HTML or is an intricate Linux-system issue.

Advantage Cognologix:
- A higher degree of autonomy, startup culture, and small teams.
- Opportunities to become an expert in emerging technologies.
- Remote working options for the right maturity level.
- Competitive salary and family benefits.
- Performance-based career advancement.

About Cognologix:
Cognologix helps companies disrupt by reimagining their business models and innovating like a startup. We are at the forefront of digital disruption and take a business-first approach to help meet our clients' strategic goals. We are a data-focused organization helping our clients deliver their next generation of products in the most efficient, modern, and cloud-native way.

Job posted by Payal Jain

Azure Developer – HDInsight

Founded 2015
Location: Bengaluru (Bangalore)
Experience: 3 - 9 years
Salary: ₹8L - ₹16L

- Working knowledge of setting up and running HDInsight applications.
- Hands-on experience in Spark, Scala, and Hive.
- Hands-on experience in ADF (Azure Data Factory).
- Hands-on experience in Big Data and the Hadoop ecosystem.
- Exposure to Azure service categories like PaaS components and IaaS subscriptions.
- Ability to design and develop ingestion and processing frameworks for ETL applications.
- Hands-on experience in PowerShell scripting and deployment on Azure.
- Experience in performance tuning and memory configuration.
- Adaptable to learning and working on new technologies.
- Good written and spoken communication.

Job posted by Harpreet kour

Data Engineer

Founded 2018
via Nu-Pie
Location: Remote, Bengaluru (Bangalore)
Experience: 4 - 8 years
Salary: ₹4L - ₹16L

Job Title: Data Engineer
Tech Job Family: DACI

Minimum Qualifications:
- Bachelor's degree in Engineering, Computer Science, CIS, or a related field (or equivalent work experience in a related field).
- 2 years of experience in data, BI, or platform engineering, data warehousing/ETL, or software engineering.
- 1 year of experience working on projects involving the implementation of solutions applying development life cycles (SDLC).

Preferred Qualifications:
- Master's degree in Computer Science, CIS, or a related field.
- 2 years of IT experience developing and implementing business systems within an organization.
- 4 years of experience working with defect or incident tracking software.
- 4 years of experience with technical documentation in a software development environment.
- 2 years of experience working with an IT Infrastructure Library (ITIL) framework.
- 2 years of experience leading teams, with or without direct reports.
- Experience with application and integration middleware.
- Experience with database technologies.

Data Engineering (specific to the Data Engineering role):
- 2 years of experience in Hadoop or any cloud big data components.
- Expertise in Java/Scala/Python, SQL, scripting, Teradata, Hadoop (Sqoop, Hive, Pig, MapReduce), Spark (Spark Streaming, MLlib), Kafka, or equivalent cloud big data components.

BI Engineering (specific to the BI Engineering role):
- Expertise in MicroStrategy/Power BI/SQL, scripting, Teradata or an equivalent RDBMS, Hadoop (OLAP on Hadoop), dashboard development, and mobile development.

Platform Engineering (specific to the Platform Engineering role):
- 2 years of experience in Hadoop, NoSQL, RDBMS or any cloud big data components, Teradata, and MicroStrategy.
- Expertise in Python, SQL, scripting, Teradata, Hadoop utilities (Sqoop, Hive, Pig, MapReduce, Spark, Ambari, Ranger), Kafka, or equivalent cloud big data components.

Lowe's is an equal opportunity employer and administers all personnel practices without regard to race, color, religion, sex, age, national origin, disability, sexual orientation, gender identity or expression, marital status, veteran status, genetics, or any other category protected under applicable law.

Job posted by Sanjay Biswakarma

Technical Architect

Founded 1995
Location: Remote, Bengaluru (Bangalore)
Experience: 10 - 20 years
Salary: ₹40L - ₹55L

- Bachelor's or master's degree in Computer Science or an equivalent area.
- 10 to 20 years of experience in software development.
- Hands-on experience designing and building B2B or B2C products.
- 3+ years architecting SaaS/web-based customer-facing products, leading engineering teams as a software/technical architect.
- Experience with engineering practices such as code refactoring, microservices, design and enterprise integration patterns, test- and design-driven development, continuous integration, building highly scalable applications, and application and infrastructure security.
- Strong cloud infrastructure experience with AWS and/or Azure.
- Experience building event-driven systems and working with message queues/topics.
- Broad working experience across multiple programming languages and frameworks, with in-depth experience in one or more of the following: .NET, Java, Scala, or Go.
- Hands-on experience with relational databases like SQL Server and PostgreSQL, and document stores like Elasticsearch or MongoDB.
- Hands-on experience with Big Data processing technologies like Hadoop/Spark is a plus.
- Hands-on experience with container technologies like Docker and Kubernetes.
- Knowledge of the Agile software development process.

Job posted by Jayaraj E

Application Developer/Product Developers

Founded 1995
Location: Bengaluru (Bangalore)
Experience: 6 - 12 years
Salary: ₹12L - ₹30L

Specification:
Location: Bangalore
Designation: Senior Engineer/Tech Lead (the designation will be decided based on skills, CTC, etc.)

Qualifications:
- Bachelor's or master's degree in Computer Science or an equivalent area.
- 6-12 years of experience in software development, building complex enterprise systems that involve large-scale data processing.
- Very good experience in any of the following languages: Java, Scala, or C#.
- Hands-on experience with databases like SQL Server, PostgreSQL, or similar is required.
- Knowledge of document stores like Elasticsearch or MongoDB is desirable.
- Hands-on experience with Big Data processing technologies like Hadoop/Spark is required.
- Strong cloud infrastructure experience with AWS and/or Azure.
- Experience with container technologies like Docker and Kubernetes.
- Experience with engineering practices such as code refactoring, design patterns, design-driven development, continuous integration, building highly scalable applications, and application security.
- Knowledge of the Agile software development process.

What you'll do:
As a Senior Engineer or Technical Lead, you will lead software development projects in a hands-on manner. You will spend about 70% of your time writing and reviewing code and creating software designs. Your expertise will expand into database design, core middle-tier modules, performance tuning, cloud technologies, DevOps, and continuous delivery domains.

Job posted by Raji Arun

Senior ETL Developer

Founded 2018
via Nu-Pie
Location: Bengaluru (Bangalore)
Experience: 5 - 8 years
Salary: ₹8L - ₹13L

- Minimum of 4 years' experience working on DW/ETL projects and expert hands-on working knowledge of ETL tools.
- Experience with data management and data warehouse development: star schemas, data vaults, RDBMS, and ODS; change data capture; slowly changing dimensions (a sketch follows this listing); data governance; data quality; partitioning and tuning; data stewardship; survivorship; fuzzy matching; concurrency; vertical and horizontal scaling; ELT and ETL; Spark, Hadoop, MPP, and RDBMS.
- Experience with DevOps architecture, implementation, and operation.
- Hands-on working knowledge of Unix/Linux.
- Building complex SQL queries; expert SQL and data analysis skills, with the ability to debug and fix data issues.
- Complex ETL program design and coding.
- Experience in shell scripting and batch scripting.
- Good communication (oral and written) and interpersonal skills.
- Work closely with business teams to understand their business needs and participate in requirements gathering, creating artifacts and seeking business approval.
- Help the business define new requirements; participate in end-user meetings to derive and define business requirements; propose cost-effective solutions for data analytics; and familiarize the team with customer needs, specifications, design targets, and techniques to support task performance and delivery.
- Propose good designs and solutions and adhere to the best design and standard practices.
- Review and propose industry-best tools and technology for ever-changing business rules and data sets.
- Conduct proofs of concept (POCs) with new tools and technologies to derive convincing benchmarks.
- Prepare the plan, design and document the architecture (High-Level Topology Design, Functional Design), review it with customer IT managers, and give the development team detailed knowledge of customer requirements, specifications, design standards, and techniques.
- Review code developed by other programmers; mentor, guide, and monitor their work, ensuring adherence to programming and documentation policies.
- Work with functional business analysts to ensure that application programs function as defined.
- Capture user feedback and comments on the delivered systems and document them for the client's and project manager's review.
- Review all deliverables for quality adherence before final delivery to the client.

Technologies (select based on requirement):
- Databases: Oracle, Teradata, Postgres, SQL Server, Big Data, Snowflake, or Redshift.
- Tools: Talend, Informatica, SSIS, Matillion, Glue, or Azure Data Factory; utilities for bulk loading and extracting.
- Languages: SQL, PL/SQL, T-SQL, Python, Java, or Scala; J/ODBC; JSON.
- Data virtualization and data services development; service delivery via REST and web services; data virtualization delivery with Denodo.
- ELT, ETL; Azure cloud certification; complex SQL queries; data ingestion, data modeling (domain), consumption (RDBMS).
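A minimal sketch of the Slowly Changing Dimension Type 2 pattern this posting lists, in plain Python for illustration; in practice this is usually done with an ETL tool or a SQL MERGE, and the table and column names here are illustrative.

from datetime import date

def scd2_apply(dimension: list, incoming: dict, today: date) -> list:
    """Apply one incoming record to an SCD Type 2 dimension table.

    Each row carries valid_from/valid_to/is_current columns, so history
    is preserved instead of being overwritten.
    """
    for row in dimension:
        if row["customer_id"] == incoming["customer_id"] and row["is_current"]:
            if row["city"] == incoming["city"]:
                return dimension          # no change; nothing to do
            # Expire the current version of the row
            row["valid_to"] = today
            row["is_current"] = False
            break
    # Insert the new version (this also handles brand-new customers)
    dimension.append({
        "customer_id": incoming["customer_id"],
        "city": incoming["city"],
        "valid_from": today,
        "valid_to": None,
        "is_current": True,
    })
    return dimension

dim = [{"customer_id": 1, "city": "Pune",
        "valid_from": date(2020, 1, 1), "valid_to": None, "is_current": True}]
dim = scd2_apply(dim, {"customer_id": 1, "city": "Bengaluru"}, date(2021, 6, 1))
# dim now holds two rows for customer 1: the expired Pune row and the
# current Bengaluru row.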

Job posted by Jerrin Thomas

Hadoop Developer

Founded 2015
Location: Bengaluru (Bangalore)
Experience: 3 - 6 years
Salary: ₹6L - ₹15L

1. Design and develop data ingestion pipelines.
2. Perform data migration and conversion activities.
3. Develop and integrate software applications using suitable development methodologies and standards, applying standard architectural patterns and taking into account critical performance characteristics and security measures.
4. Collaborate with business analysts, architects, and senior developers to establish the physical application framework (e.g. libraries, modules, execution environments).
5. Perform end-to-end automation of the ETL process for the various datasets being ingested into the big data platform.