
ETL Jobs in Bangalore (Bengaluru)

Explore top ETL job opportunities in Bangalore (Bengaluru) at top companies and startups. All jobs are added by verified employees, who can be contacted directly below.

Data Engineer

Founded 2015
Products and services
Location: Bengaluru (Bangalore), Hyderabad
Experience: 3 - 6 years
Salary: ₹10,00,000 - ₹15,00,000

Desired Skills, Experience, Qualifications, and Certifications:
• 5+ years’ experience developing and maintaining modern ingestion pipelines using technologies like Spark, Apache NiFi, etc.
• 2+ years’ experience with Healthcare Payors (focusing on Membership, Enrollment, Eligibility, Claims, and Clinical data)
• Hands-on experience with AWS Cloud and its native components like S3, Athena, Redshift, and Jupyter Notebooks
• Strong in Spark Scala and Python pipelines (ETL and streaming)
• Strong experience with metadata management tools like AWS Glue
• Strong experience coding in languages like Java and Python
• Experience designing ETL and streaming pipelines in Spark Scala / Python
• Good experience in requirements gathering, design, and development
• Working with cross-functional teams to meet strategic goals
• Experience in high-volume data environments
• Critical thinking and excellent verbal and written communication skills
• Strong problem-solving and analytical abilities; able to work and deliver independently
• Good to have: AWS Developer certification, Scala coding experience, Postman/API experience, and Apache Airflow or similar schedulers
• Nice to have: experience with healthcare messaging standards like HL7, CCDA, EDI, 834, 835, 837
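The core of the role above is building extract-transform-load ingestion steps. As a rough illustration of that shape (the listing's actual stack is Spark and NiFi; this stdlib-only Python sketch and its field names are hypothetical):

```python
import csv
import io

def extract(csv_text: str):
    """Extract: parse raw CSV rows into dicts."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def transform(rows):
    """Transform: convert types and drop records that fail validation."""
    out = []
    for row in rows:
        try:
            out.append({"member_id": int(row["member_id"]),
                        "claim_amount": float(row["claim_amount"])})
        except (KeyError, ValueError):
            continue  # skip malformed records
    return out

def load(rows, sink):
    """Load: append cleaned records to a destination
    (a plain list stands in for S3/Redshift here)."""
    sink.extend(rows)
    return len(rows)

raw = "member_id,claim_amount\n1,250.0\n2,bad\n3,75.5\n"
warehouse = []
loaded = load(transform(extract(raw)), warehouse)
print(loaded)  # 2 (the malformed row is dropped)
```

In a real Spark pipeline each stage would be a DataFrame transformation rather than a list operation, but the extract/transform/load separation is the same.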

Job posted by geeti gaurav mohanty

Data Engineer

Founded 2015
Products and services
Location: Bengaluru (Bangalore)
Experience: 3 - 7 years
Salary: ₹15,00,000 - ₹24,00,000

About This Role
As a data engineer, your primary focus will be designing data integration modules on big data technologies, developing new features and functionality for our data pipelines, and implementing testing and managing data workloads within our standard continuous development and continuous delivery processes in the cloud.

What You’ll Do
● Deliver plugins for our Python-based ETL pipelines.
● Deliver Python microservices for provisioning and managing cloud infrastructure.
● Implement algorithms to analyze large data sets.
● Draft design documents that translate requirements into code.
● Deal with challenges associated with handling large volumes of data.
● Assume responsibilities from technical design through technical client support.
● Manage expectations with internal stakeholders and context-switch in a fast-paced environment. Thrive in an environment that uses AWS and Elasticsearch extensively.
● Keep abreast of technology and contribute to the engineering strategy.
● Champion best development practices and provide mentorship.

What We’re Looking For
● Experience in:
○ Python 3
○ Python libraries used for data (Pandas, NumPy)
○ AWS
○ Elasticsearch (preferred, not mandatory)
○ Performance tuning
○ Object-oriented design and modelling
○ Delivering complex software
○ CI/CD tools (GitLab, GitHub, Jenkins, Circle, etc.)
● Knowledge of design patterns.
● Sharp analytical and problem-solving skills.
● Strong sense of ownership.
● Demonstrable desire to learn and grow.
● Excellent written and oral communication skills.
● Mature collaboration and mentoring abilities.

Job posted by Anusha S

Tableau Developer

Founded 2018
Products and services
via Nu-Pie
Location: Remote, Bengaluru (Bangalore)
Experience: 6 - 12 years
Salary: ₹10,00,000 - ₹18,00,000

Responsibilities:
- Hands-on development and maintenance in Tableau: developing, maintaining, and managing advanced reporting, analytics, dashboards, and other BI solutions
- Review and improve existing Tableau dashboards and data models/systems, collaborating with teams to integrate new systems
- Provide support and expertise to the business community to assist with better utilization of Tableau
- Understand business requirements, conduct analysis, and recommend solution options for intelligent dashboards in Tableau
- Apply experience with data extraction, transformation, and load (ETL)
- Execute SQL queries across multiple data sources in support of business intelligence reporting needs; format query results/reports in various ways
- Participate in QA testing, liaising with other project team members and being responsive to client needs, with an eye for detail in a fast-paced environment
- Perform and document data analysis, data validation, and data mapping/design

Key Performance Indicators: KPIs will be outlined in detail in the goal sheet.

Ideal Background:
- Education (minimum): graduation, preferably in Science
- Experience (minimum): 2-3 years' relevant work experience in reporting and data analytics using Tableau
- Tableau certifications preferred
- Work experience in the regulated medical device / pharmaceutical industry is an added advantage, but not mandatory
- Languages (minimum): English (written and spoken)

Specific Professional Competencies:
- Extensive experience developing, maintaining, and managing Tableau-driven dashboards and analytics, with working knowledge of Tableau administration/architecture
- A solid understanding of SQL, relational databases, and normalization
- Proficiency with query and reporting analysis tools
- Competency in Excel (macros, pivot tables, etc.)
- Degree in Mathematics, Computer Science, Information Systems, or a related field

Job posted by Hepsibah W

Tableau Developer

Founded 2018
Products and services
via Nu-Pie
Location: Remote, Bengaluru (Bangalore)
Experience: 2 - 4 years
Salary: ₹4,00,000 - ₹9,00,000

Responsibilities:
- Hands-on development and maintenance in Tableau: developing, maintaining, and managing advanced reporting, analytics, dashboards, and other BI solutions
- Review and improve existing Tableau dashboards and data models/systems, collaborating with teams to integrate new systems
- Provide support and expertise to the business community to assist with better utilization of Tableau
- Understand business requirements, conduct analysis, and recommend solution options for intelligent dashboards in Tableau
- Apply experience with data extraction, transformation, and load (ETL)
- Execute SQL queries across multiple data sources in support of business intelligence reporting needs; format query results/reports in various ways
- Participate in QA testing, liaising with other project team members and being responsive to client needs, with an eye for detail in a fast-paced environment
- Perform and document data analysis, data validation, and data mapping/design

Key Performance Indicators: KPIs will be outlined in detail in the goal sheet.

Ideal Background:
- Education (minimum): graduation, preferably in Science
- Experience (minimum): 2-3 years' relevant work experience in reporting and data analytics using Tableau
- Tableau certifications preferred
- Work experience in the regulated medical device / pharmaceutical industry is an added advantage, but not mandatory
- Languages (minimum): English (written and spoken)

Specific Professional Competencies:
- Extensive experience developing, maintaining, and managing Tableau-driven dashboards and analytics, with working knowledge of Tableau administration/architecture
- A solid understanding of SQL, relational databases, and normalization
- Proficiency with query and reporting analysis tools
- Competency in Excel (macros, pivot tables, etc.)
- Degree in Mathematics, Computer Science, Information Systems, or a related field

Job posted by Sanjay Biswakarma

Mainframes Developer

Founded 2018
Products and services
Location: Hyderabad, Bengaluru (Bangalore), Pune, Chennai, Noida, NCR (Delhi | Gurgaon | Noida)
Experience: 4 - 8 years
Salary: ₹6,00,000 - ₹12,00,000

Job Title: Mainframes Developer
Work Location: Anywhere in India
Experience Level: 4-8 years
Notice Period: immediate joiners
This is a full-time opportunity with our client.
Package: as per market standards
Mandatory Skills: COBOL, DB2, and ETL

Roles & Responsibilities:
- Experienced with building/testing complex, multi-platform, distributed applications
- Knowledge of the major tools for the specific application product: IBM zSeries mainframe, COBOL, CICS, JCL, DB2, VSAM
- Should have knowledge of databases using SQL
- Strong knowledge of mainframe CICS and DB2 is desired

Job posted by Venkat B

Informatica Developer

Founded 2015
Products and services
Location: Remote, Bengaluru (Bangalore)
Experience: 2 - 5 years
Salary: ₹10,00,000 - ₹14,00,000

Informatica PowerCenter (9.x, 10.2): minimum 2+ years’ experience. SQL / PL/SQL: understanding of SQL procedures; able to convert procedures into Informatica mappings. Optional: knowledge of Windows batch scripting is an advantage.

Job posted by Harpreet kour

Senior Data Engineer

Founded 2019
Products and services
Location: Remote, Chennai, Bengaluru (Bangalore)
Experience: 4 - 7 years
Salary: ₹20,00,000 - ₹35,00,000

In this role you'll get:
- Being part of the core team for the data platform, setting up the platform foundation while adhering to all required quality standards and design patterns
- Writing efficient, quality code that can scale
- Adopting Bookr quality standards and recommending process standards and best practices
- Researching, learning, and adapting new technologies to solve problems and improve existing solutions
- Contributing to the engineering excellence backlog
- Identifying performance issues
- Effective code and design reviews
- Improving reliability of the overall production system by proactively identifying patterns of failure
- Leading and mentoring junior engineers by example
- End-to-end ownership of stories (including design, serviceability, performance, and failure handling)
- Striving hard to provide the best experience to anyone using our products
- Conceptualising innovative and elegant solutions to challenging big data problems
- Engaging with Product Management and Business to drive the agenda, set your priorities, and deliver awesome products
- Adhering to company policies, procedures, mission, values, and standards of ethics and integrity

On day one we'll expect you to have:
- A B.E/B.Tech from a reputed institution
- A minimum of 5 years of software development experience, with at least a year leading/guiding people
- Expert coding skills in Python/PySpark or Java/Scala
- A deep understanding of the Big Data ecosystem (Hadoop and Spark), project experience with Spark, and the ability to independently troubleshoot Spark jobs
- A good understanding of distributed systems
- Expert hands-on experience with RDBMS
- The ability to learn fast and quickly adapt to new technologies
- High ownership and commitment
- The ability to work independently as well as collaboratively in a team

Added bonuses you have:
- Hands-on experience with EMR/Glue/Databricks
- Hands-on experience with Airflow
- Hands-on experience with the AWS Big Data ecosystem

We are looking for passionate engineers who are always hungry for challenging problems. We believe in creating an opportunistic, yet balanced, work environment for savvy, entrepreneurial tech individuals. We are thriving on remote work, with the team working across multiple time zones.

- Flexible hours & remote work: we are a results-focused bunch, so we encourage you to work whenever and wherever you feel most creative and focused.
- Unlimited PTO: we want you to feel free to recharge your batteries when you need it!
- Stock options: the opportunity to participate in the company stock plan.
- Flat hierarchy: team leaders at your fingertips.
- BFC (stands for bureaucracy-free company): we're action-oriented and don't bother with dragged-out meetings or pointless admin exercises; we'd rather get our hands dirty!
- Working alongside leaders: as part of the core team, you will work directly with the founding and management team.

Job posted by Nimish Mehta

Data Engineer

Founded 2015
Products and services
via Qrata
Location: Bengaluru (Bangalore)
Experience: 4 - 9 years
Salary: ₹28,00,000 - ₹56,00,000

- 5+ years of experience in a Data Engineer role
- Proficiency in Linux
- SQL knowledge and experience working with relational databases and query authoring (SQL), as well as familiarity with databases including MySQL, Mongo, Cassandra, and Athena
- Experience with Python/Scala
- Experience with Big Data technologies like Apache Spark
- Experience with Apache Airflow
- Experience with data pipeline and ETL tools like AWS Glue
- Experience working with AWS cloud services: EC2, S3, RDS, Redshift

Job posted by Blessy Fernandes

Product Analyst

Founded 2019
Products and services
Location: Bengaluru (Bangalore)
Experience: 2 - 8 years
Salary: ₹10,00,000 - ₹25,00,000

Our Story
Animall was first conceptualized in June 2019 as part of an internal hackathon at Pratilipi (a storytelling platform), where Animall won both the jury and audience awards. After talking to hundreds of dairy farmers, we started working on Animall as a weekend project in August 2019 and started seeing very strong user love and traction. During this time we talked to even more dairy farmers (almost thousands of them), solved some of their problems in non-scalable ways, and finally decided to take on this large, untouched, and very interesting market full-time in November 2019. Till now, Animall has primarily focused on building a peer-to-peer cattle trading platform, and the journey has been very encouraging and promising: we have reached more than 50 lakh dairy farmers within a short span of 1.5 years. More than 300,000 cattle have been sold through Animall, amounting to Rs 1500 Cr of GTV, at a monthly run rate of Rs 320 Cr. Our dairy farmers have rated us 4.6 out of 5, and 60% of them refer Animall to at least one friend every month.

Responsibilities include, but are not limited to:
- Partnering with PMs to define the requirements for the Animall product and data workflows and tooling; you will be a thought and execution partner.
- Establishing product metrics, constructing operational and analytical dashboards, and being responsible for certain KPIs; you will help our team and organization be more data-driven.
- Generating insights to define and refine product opportunities and business cases; you will help us make more impactful decisions.
- Engaging with and building alignment across a diverse set of stakeholders including Business Development, Product, Data, and Engineering; you will be a recognized expert in your domain.

Qualifications:
- Demonstrated data fluency; you’re comfortable with large datasets, constructing your own queries, designing data visualizations, and extracting meaning from your analysis.
- Strong product and strategy mindset; you’re perpetually asking “why?”
- Experience working and establishing credibility with colleagues spanning Product, Engineering, Data, Finance, and Business Development; you have strong written and verbal communication skills.
- Ability to dig into the details where needed without losing sight of the bigger picture.
- Exceptionally curious and excited by making sense of complexity.
- Strong knowledge of and experience with BI tools (Power BI, Tableau, Metabase, Google Data Studio), databases (SQL), and programming (constructing ETL pipelines, Python and/or JavaScript).
- 2+ years of relevant work experience as an analyst, product manager, data scientist, or management consultant.

Job posted by Saif Naik

Senior Developer - Spark

Founded 2018
Products and services
Location: Bengaluru (Bangalore)
Experience: 2 - 5 years
Salary: ₹10,00,000 - ₹40,00,000

- Own the end-to-end implementation of the assigned data processing components/product features, i.e. design, development, deployment, and testing of the data processing components and associated flows, conforming to best coding practices
- Create and optimize data engineering pipelines for analytics projects
- Support data and cloud transformation initiatives
- Contribute to our cloud strategy based on prior experience
- Independently work with all stakeholders across the organization to deliver enhanced functionality
- Create and maintain automated ETL processes with a special focus on data flow, error recovery, and exception handling and reporting
- Gather and understand data requirements, work in the team to achieve high-quality data ingestion, and build systems that can process and transform the data
- Be able to comprehend the application of database indexes and transactions
- Be involved in the design and development of a Big Data predictive analytics SaaS-based customer data platform, using object-oriented analysis, design and programming skills, and design patterns
- Implement ETL workflows for data matching, data cleansing, data integration, and management
- Maintain existing data pipelines and develop new data pipelines using big data technologies
- Lead the effort of continuously improving reliability, scalability, and stability of microservices and the platform

Job posted by Tanisha Takkar

Data Warehousing Engineer - Big Data/ETL

Founded 2014
Products and services
Location: Remote, Bengaluru (Bangalore)
Experience: 3 - 10 years
Salary: ₹5,00,000 - ₹15,00,000

Must-Have Skills:
- Solid knowledge of DWH, ETL, and Big Data concepts
- Excellent SQL skills (with knowledge of SQL analytics functions)
- Working experience with an ETL tool, i.e. SSIS / Informatica
- Working experience with Azure or AWS Big Data tools
- Experience implementing data jobs (batch / real-time streaming)
- Excellent written and verbal communication skills in English; self-motivated, with a strong sense of ownership and readiness to learn new tools and technologies

Preferred Skills:
- Experience with PySpark / Spark SQL
- AWS data tools (AWS Glue, AWS Athena)
- Azure data tools (Azure Databricks, Azure Data Factory)

Other Skills:
- Knowledge of Azure Blob, Azure File Storage, AWS S3, Elastic Search / Redis Search
- Knowledge of domain/function (across pricing, promotions, and assortment)
- Implementation experience with a schema and data validator framework (Python / Java / SQL)
- Knowledge of DQS and MDM

Key Responsibilities:
- Independently work on ETL / DWH / Big Data projects
- Gather and process raw data at scale
- Design and develop data applications using selected tools and frameworks as required and requested
- Read, extract, transform, stage, and load data to selected tools and frameworks as required and requested
- Perform tasks such as writing scripts, web scraping, calling APIs, writing SQL queries, etc.
- Work closely with the engineering team to integrate your work into our production systems
- Process unstructured data into a form suitable for analysis, and analyse the processed data
- Support business decisions with ad hoc analysis as needed
- Monitor data performance and modify infrastructure as needed

We are looking for a smart resource with excellent communication skills.
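"SQL analytics functions" in the list above refers to window functions: aggregates computed per row over a partition, without collapsing rows. A minimal sketch using Python's built-in sqlite3 (the table and data are hypothetical; SQLite 3.25+ is assumed for window-function support):

```python
import sqlite3

# Hypothetical sales table, used only for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("north", 100), ("north", 200), ("south", 50)],
)

# SUM(...) OVER (PARTITION BY ...) keeps every row while attaching
# the per-region total to each one.
rows = conn.execute(
    """
    SELECT region,
           amount,
           SUM(amount) OVER (PARTITION BY region) AS region_total
    FROM sales
    ORDER BY region, amount
    """
).fetchall()

for row in rows:
    print(row)
# ('north', 100, 300)
# ('north', 200, 300)
# ('south', 50, 50)
```

A plain GROUP BY would return one row per region; the window form is what dashboard and validation queries typically need.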

Job posted by Vishal Sharma

MicroStrategy Developer

Founded 2018
Products and services
via Nu-Pie
Location: Pune, Hyderabad, Mumbai, Bengaluru (Bangalore)
Experience: 6 - 10 years
Salary: ₹6,00,000 - ₹13,00,000

- 5 years of experience in Business Intelligence development
- Experience with the MicroStrategy toolset: Desktop, Report Services, Architect, OLAP, Administrator
- Strong experience in the design, creation, and deployment of reports and dashboards
- Experience designing reusable MicroStrategy components for business reporting
- Excellent communication skills to interact with users at various levels within the organization
- Strong SQL skills to perform queries, data/file validation, analysis, profiling, etc., as needed
- Creating and maintaining documentation is a plus
- Ability to work collaboratively in teams and develop meaningful relationships to achieve common goals
- Experience with ETL and Collibra is a plus
- Previous experience in the banking industry is preferred

Job posted by Jerrin Thomas

Software Developer

Founded 2018
Products and services
via Nu-Pie
Location: Remote, Pune, Hyderabad, Mumbai, Bengaluru (Bangalore)
Experience: 5 - 7 years
Salary: ₹6,00,000 - ₹19,00,000

- 5 years of experience in Business Intelligence development
- Experience with the MicroStrategy toolset: Desktop, Report Services, Architect, OLAP, Administrator
- Strong experience in the design, creation, and deployment of reports and dashboards
- Experience designing reusable MicroStrategy components for business reporting
- Excellent communication skills to interact with users at various levels within the organization
- Strong SQL skills to perform queries, data/file validation, analysis, profiling, etc., as needed
- Creating and maintaining documentation is a plus
- Ability to work collaboratively in teams and develop meaningful relationships to achieve common goals
- Experience with ETL and Collibra is a plus
- Previous experience in the banking industry is preferred

Job posted by Sanjay Biswakarma

Azure Architect

Founded 2018
Products and services
via Nu-Pie
Location: Bengaluru (Bangalore)
Experience: 6 - 9 years
Salary: ₹10,00,000 - ₹18,00,000

1) 6-9 years of industry experience, with at least 4 years in an architect role and at least 3-5 years designing and building analytics/data solutions in Azure.
2) Demonstrated in-depth skills with Azure Data Factory (ADF), Azure SQL Server, Azure Synapse, and ADLS, with the ability to configure and administer all aspects of Azure SQL Server.
3) Demonstrated experience delivering multiple data solutions as an architect.
4) Demonstrated experience with ETL development both on-premises and in the cloud, using SSIS, Data Factory, and related Microsoft and other ETL technologies.
5) DP-200 and DP-201 certifications preferred.
6) Good to have hands-on experience with Power BI and Azure Databricks.
7) Should have good communication and presentation skills.

Job posted by Jerrin Thomas

Data Engineer

Founded 2013
Products and services
Location: Bengaluru (Bangalore)
Experience: 3 - 5 years
Salary: ₹12,00,000 - ₹14,00,000

We are looking for a Data Engineer with 3-5 years' experience in Python, SQL, AWS (EC2, S3, Elastic Beanstalk, API Gateway), and Java. The applicant must be able to perform data mapping (data type conversion, schema harmonization) using Python, SQL, and Java. The applicant must be familiar with, and have programmed, ETL interfaces (OAuth, REST API, ODBC) using the same languages. The company is looking for someone who shows an eagerness to learn and who asks concise questions when communicating with teammates.
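As a hedged sketch of what the data-mapping task described above (data type conversion, schema harmonization) might look like in Python; the schema, field names, and converters here are hypothetical, purely for illustration:

```python
# target_field: (source_field, converter) -- a declarative mapping from a
# source record layout onto a harmonized target schema.
SCHEMA_MAP = {
    "customer_id": ("custId", int),
    "signup_date": ("created", str),
    "balance": ("bal", float),
}

def harmonize(record: dict) -> dict:
    """Map a source record onto the target schema, converting types."""
    return {
        target: convert(record[source])
        for target, (source, convert) in SCHEMA_MAP.items()
    }

source_row = {"custId": "42", "created": "2021-05-01", "bal": "99.50"}
print(harmonize(source_row))
# {'customer_id': 42, 'signup_date': '2021-05-01', 'balance': 99.5}
```

In practice the same mapping table can drive generated SQL (CAST expressions) or Java converters, so the schema lives in one place.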

Job posted by Ragul Ragul

Data Engineer - AWS/ETL/Spark

Founded 2018
Products and services
Location: Bengaluru (Bangalore)
Experience: 3 - 7 years
Salary: ₹10,00,000 - ₹20,00,000

Hypersonix.ai is disrupting the Business Intelligence and Analytics space with AI, ML, and NLP capabilities to drive specific business insights with a conversational user experience. Hypersonix.ai has been built from the ground up with new-age technology to simplify the consumption of data for our customers in Restaurants, Hospitality, and other industry verticals.

We are looking for talented and driven Data Engineers at various levels to work with customers and data scientists to build the data warehouse, analytical dashboards, and ML capabilities as per customer needs.

Required Qualifications:
- 3-5 years of experience developing and managing streaming and batch data pipelines
- Experience in Big Data, data architecture, data modeling, data warehousing, data wrangling, data integration, data testing, and application performance tuning
- Experience with data engineering tools and platforms such as Kafka, Spark, Databricks, Flink, Storm, Druid, and Hadoop
- Strong hands-on programming and scripting for the Big Data ecosystem (Python, Scala, Spark, etc.)
- Experience building batch and streaming ETL data pipelines using workflow management tools like Airflow, Luigi, NiFi, Talend, etc.
- Familiarity with cloud-based platforms like AWS, Azure, or GCP
- Experience with cloud data warehouses like Redshift and Snowflake
- Proficiency in writing complex SQL queries
- Experience working with structured and semi-structured data formats like CSV, JSON, and XML
- Desire to learn about, explore, and invent new tools for solving real-world problems using data

Desired Qualifications:
- Cloud computing experience, Amazon Web Services (AWS)
- Prior experience with data warehousing concepts and multi-dimensional data models
- Full command of analytics concepts including dimensions, KPIs, reports, and dashboards

Job posted by Gowshini Maheswaran

QA-Sr. Software Engineer

Founded 2013
Products and services
Location: Bengaluru (Bangalore)
Experience: 3 - 5 years
Salary: ₹10,00,000 - ₹12,00,000

Primary:
- Strong QA experience in web/cloud applications, databases (preferably MySQL), and ETLs
- Experience in Unix/Linux testing
- Experience working in scrum teams
- Experience preparing test approaches and plans, test scenarios, and other QA-related artifacts is mandatory
- Good communication and knowledge management/documentation skills

Secondary:
- Strong knowledge/QA experience with API testing and Salesforce testing
- QA experience with Informatica
- Exposure to JIRA
- Experience/exposure to an automation tool like Selenium or JMeter

Responsibilities:
- Own and manage deployment and releases independently in a QA environment from an end-to-end testing perspective
- Coordinate with various teams to understand scope and requirements, and bring an end-user point of view to defect triage
- Provide visibility of risks and test progress to all relevant stakeholders

Required experience and education:
- 3-5 years of relevant experience in manual and automated testing
- B.E/B.Tech in computer science preferred, or equivalent experience

Job posted by Santhosh Kumar KR

Sr Data Engineer - (Python, Pandas)

Founded 2017
Location: Bengaluru (Bangalore)
Experience: 5 - 20 years
Salary: Best in industry (₹20L – ₹35L per annum)

What you’ll do:
- Deliver plugins for our Python-based ETL pipelines.
- Deliver Python microservices for provisioning and managing cloud infrastructure.
- Implement algorithms to analyse large data sets.
- Draft design documents that translate requirements into code.
- Deal with challenges associated with handling large volumes of data.
- Assume responsibilities from technical design through technical client support.
- Manage expectations with internal stakeholders and context-switch in a fast-paced environment.
- Thrive in an environment that uses AWS and Elasticsearch extensively.
- Keep abreast of technology and contribute to the engineering strategy.
- Champion best development practices and provide mentorship.

What we’re looking for:
- Experience in Python 3 and Python libraries used for data (such as pandas, NumPy).
- AWS and Elasticsearch.
- Performance tuning.
- Object-oriented design and modelling.
- Delivering complex software, ideally in a FinTech setting.
- CI/CD tools.
- Knowledge of design patterns.
- Sharp analytical and problem-solving skills.
- Strong sense of ownership.
- Demonstrable desire to learn and grow.
- Excellent written and oral communication skills.
- Mature collaboration and mentoring abilities.

About SteelEye culture:
- Work from home until you are vaccinated against COVID-19
- Top-of-the-line health insurance
- Order discounted meals every day from a dedicated portal
- Fair and simple salary structure
- 30+ holidays in a year
- Fresh fruits every day
- Centrally located: 5 mins to the nearest metro station (MG Road)
- Measured on output, not input
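The first responsibility above, delivering plugins for a Python-based ETL pipeline, usually means transform steps that register themselves and run in a configured order. A hedged sketch of one such plugin-registry pattern (the `TRANSFORMS` registry, step names and sample trade records are illustrative assumptions, not SteelEye's actual API):

```python
# A minimal plugin-registry sketch for a Python ETL pipeline: transform
# steps register under a name and are applied in a configured order.
# The registry, step names and sample records are illustrative only.
TRANSFORMS = {}

def transform(name):
    """Decorator that registers a transform step under a name."""
    def register(fn):
        TRANSFORMS[name] = fn
        return fn
    return register

@transform("drop_nulls")
def drop_nulls(records):
    # Remove records with any missing field value.
    return [r for r in records if all(v is not None for v in r.values())]

@transform("uppercase_symbol")
def uppercase_symbol(records):
    # Normalize instrument symbols to upper case.
    return [{**r, "symbol": r["symbol"].upper()} for r in records]

def run_pipeline(records, steps):
    """Apply the configured transform steps, in order."""
    for step in steps:
        records = TRANSFORMS[step](records)
    return records

trades = [
    {"symbol": "aapl", "qty": 10},
    {"symbol": "msft", "qty": None},
]
clean = run_pipeline(trades, ["drop_nulls", "uppercase_symbol"])
print(clean)  # [{'symbol': 'AAPL', 'qty': 10}]
```

The appeal of this shape is that new transforms ship as standalone plugins; the pipeline's step list becomes configuration rather than code.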

Job posted by Arjun Shivraj

Sr Platform Engineer- (Python, ETL)

Founded 2017
Location: Bengaluru (Bangalore)
Experience: 5 - 15 years
Salary: Best in industry (₹20L – ₹35L per annum)

About SteelEye
SteelEye is a fast-growing FinTech company based in London, with offices in Bangalore and Paris, that offers a data platform to help financial institutions such as investment banks, hedge funds, brokerage firms and asset management firms comply with financial regulations in the European Union. Our clients can aggregate, search, surveil and report on trade, communications and market data. SteelEye also enables customers to gain powerful insights from their data, helping them to trade with greater efficiency and profitability. The company has a highly experienced management team and a strong board, who have decades of technology and management experience and have worked in senior positions at many leading international financial businesses.

We are looking to hire a seasoned SRE to join us as we start on our next phase of growth. We have a culture of openness, collaboration, and the passion to get things done whilst appreciating the importance of a good work-life balance. Being part of a start-up can be as exciting as it is challenging. You will be part of the SteelEye team not just because of your talent but also because of your entrepreneurial flair, which we thrive on at SteelEye. This means we want you to be curious, contribute, ask questions and share ideas. We encourage you to get involved in helping shape our business.

What you’ll do:
- Deliver plugins for our Python-based ETL pipelines.
- Deliver Python microservices for provisioning and managing cloud infrastructure.
- Implement algorithms to analyse large data sets.
- Draft design documents that translate requirements into code.
- Deal with challenges associated with handling large volumes of data.
- Assume responsibilities from technical design through technical client support.
- Manage expectations with internal stakeholders and context-switch in a fast-paced environment.
- Thrive in an environment that uses AWS and Elasticsearch extensively.
- Keep abreast of technology and contribute to the engineering strategy.
- Champion best development practices and provide mentorship.

What we’re looking for:
- Experience in Python 3 and Python libraries used for data (such as pandas, NumPy).
- AWS and Elasticsearch.
- Performance tuning.
- Object-oriented design and modelling.
- Delivering complex software, ideally in a FinTech setting.
- CI/CD tools.
- Knowledge of design patterns.
- Sharp analytical and problem-solving skills.
- Strong sense of ownership.
- Demonstrable desire to learn and grow.
- Excellent written and oral communication skills.
- Mature collaboration and mentoring abilities.

About SteelEye culture:
- Work from home until you are vaccinated against COVID-19
- Top-of-the-line health insurance
- Order discounted meals every day from a dedicated portal
- Fair and simple salary structure
- 30+ holidays in a year
- Fresh fruits every day
- Centrally located: 5 mins to the nearest metro station (MG Road)
- Measured on output, not input

Job posted by Arjun Shivraj

ETL Architect

Founded 1995
Location: Remote, Chennai, Bengaluru (Bangalore), Hyderabad, Pune, Mumbai, NCR (Delhi | Gurgaon | Noida), Kolkata
Experience: 10 - 18 years
Salary: Best in industry (₹15L – ₹30L per annum)

Key skills: Informatica PowerCenter, Informatica Change Data Capture, Azure SQL, Azure Data Lake

Job Description:
- Minimum of 15 years of experience with Informatica ETL and database technologies
- Experience with Azure database technologies, including Azure SQL Server and Azure Data Lake
- Exposure to change data capture technology
- Lead and guide development of an Informatica-based ETL architecture
- Develop solutions in a highly demanding environment and provide hands-on guidance to other team members
- Head complex ETL requirements and design
- Implement an Informatica-based ETL solution fulfilling stringent performance requirements
- Collaborate with product development teams and senior designers to develop architectural requirements
- Assess requirements for completeness and accuracy, and determine whether they are actionable for the ETL team
- Conduct impact assessments and determine the size of effort based on requirements
- Develop full SDLC project plans to implement the ETL solution and identify resource requirements
- Play an active, leading role in shaping and enhancing the overall ETL Informatica architecture; identify, recommend and implement ETL process and architecture improvements
- Assist with and verify the design of the solution and the production of all design-phase deliverables
- Manage the build phase and quality-assure code to ensure it fulfils requirements and adheres to the ETL architecture

Job posted by Jayaraj E

Data Engineer

Founded 2018
via Nu-Pie
Location: Remote, Bengaluru (Bangalore)
Experience: 4 - 8 years
Salary: Best in industry (₹4L – ₹16L per annum)

Job Title: Data Engineer
Tech Job Family: DACI

Minimum Qualifications:
- Bachelor's Degree in Engineering, Computer Science, CIS, or related field (or equivalent work experience in a related field)
- 2 years of experience in Data, BI or Platform Engineering, Data Warehousing/ETL, or Software Engineering
- 1 year of experience working on project(s) involving the implementation of solutions applying development life cycles (SDLC)

Preferred Qualifications:
- Master's Degree in Computer Science, CIS, or related field
- 2 years of IT experience developing and implementing business systems within an organization
- 4 years of experience working with defect or incident tracking software
- 4 years of experience with technical documentation in a software development environment
- 2 years of experience working with an IT Infrastructure Library (ITIL) framework
- 2 years of experience leading teams, with or without direct reports
- Experience with application and integration middleware
- Experience with database technologies

Data Engineering (specific to the Data Engineering role):
- 2 years of experience in Hadoop or any cloud Big Data components
- Expertise in Java/Scala/Python, SQL, scripting, Teradata, Hadoop (Sqoop, Hive, Pig, MapReduce), Spark (Spark Streaming, MLlib), Kafka or equivalent cloud Big Data components

BI Engineering (specific to the BI Engineering role):
- Expertise in MicroStrategy/Power BI/SQL, scripting, Teradata or equivalent RDBMS, Hadoop (OLAP on Hadoop), dashboard development, mobile development

Platform Engineering (specific to the Platform Engineering role):
- 2 years of experience in Hadoop, NoSQL, RDBMS or any cloud Big Data components, Teradata, MicroStrategy
- Expertise in Python, SQL, scripting, Teradata, Hadoop utilities like Sqoop, Hive, Pig, MapReduce, Spark, Ambari, Ranger, Kafka or equivalent cloud Big Data components

Lowe’s is an equal opportunity employer and administers all personnel practices without regard to race, color, religion, sex, age, national origin, disability, sexual orientation, gender identity or expression, marital status, veteran status, genetics or any other category protected under applicable law.

Job posted by Sanjay Biswakarma

Power BI
at Nu-Pie

Founded 2018
via Nu-Pie
Location: Bengaluru (Bangalore)
Experience: 1 - 3 years
Salary: Best in industry (₹3L – ₹5L per annum)

Required Skills and Experience:
- General or strong IT background, with at least 2 to 4 years of working experience
- Strong understanding of data integration and ETL methodologies
- Demonstrated ability to multi-task
- Excellent English communication skills
- A desire to be part of a growing company; you'll have two core responsibilities (client work and company building), and we expect dedication to both
- Willingness to learn and work on new technologies
- Should be a quick self-learner

Tools:
1. Good knowledge of Power BI and Tableau
2. Good experience in handling data in Excel

Job posted by Jerrin Thomas

Data Engineer

Founded 2018
via Nu-Pie
Location: Bengaluru (Bangalore)
Experience: 2 - 4 years
Salary: Best in industry (₹4L – ₹6L per annum)

The Data Engineer is at the core of our customers' success. We are looking for talented and experienced engineers to join our team. If you are a problem solver and enjoy working in an exciting and fast-paced environment, this may be the perfect opportunity for you to join us and take your career to the next level.

Responsibilities:
- Communicate effectively with customers, including setting expectations for callbacks and following up on their issues.
- Handle technical issues of varying complexity and help other members of the team.
- Collaborate with other support teams, Engineering and other internal departments to help resolve critical issues.
- Cross-train on multiple technologies to effectively build and support the product/technology portfolio.
- Troubleshoot, diagnose and resolve customer issues independently, making use of the resources available to you.
- Keep all ongoing cases documented and up to date in the case management system.
- Promote and maintain a high-quality, professional, service-oriented DigiTop image amongst internal and external customers.
- Work in adherence to defined processes and procedures implemented in the organization.
- Maintain and continually upgrade technical understanding of products and technologies.
- Build and maintain solid working relationships with the members of the team.
- Write and/or edit knowledge articles for every issue resolved.

Required Skills and Experience:

General:
- Strong IT background, with at least 2 to 4 years of working experience
- Strong understanding of data integration and ETL methodologies
- Degree in Computer Science or equivalent experience
- Demonstrated ability to multi-task
- Excellent English communication skills
- A desire to be part of a growing company; you'll have two core responsibilities (client work and company building), and we expect dedication to both
- Willingness to learn and work on new technologies
- Should be a quick self-learner

Technical:
- Extensive experience using ETL methodology to support data extraction, transformation, data validation and loading
- Database skills (load data from Excel/CSV/text files to staging, write SQL statements/functions/stored procedures, run SSIS packages)
- Knowledge of integrating with different data sources such as SQL, web services, APIs, text files and CSV is mandatory
- Extensive experience troubleshooting and solving complex technical problems
- Experience in at least one ETL tool (SSIS)
- Knowledge of other ETL tools such as SnapLogic, IBM DataStage, MuleSoft and Informatica is a plus
- Knowledge of different/diversified technologies is an added advantage and will be given additional weightage
- Experience with visualization tools such as Power BI, Qlik Sense and Tableau is preferred
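The staging workflow named in the technical requirements (load CSV/text files into a staging table, then apply SQL transformations into the target) can be sketched with the standard library. This is a hedged illustration only: SQLite stands in for the SQL Server/SSIS stack the posting names, and the file contents, table names and validation rule are invented for the example:

```python
# Sketch of a staging-table load: raw CSV rows land in a staging table
# as untyped text, then a SQL step validates and casts them into the
# typed target table. SQLite stands in for SQL Server; data is made up.
import csv
import io
import sqlite3

RAW = """customer_id,signup_date,spend
101,2021-04-01,1200
102,2021-04-02,not_a_number
103,2021-04-03,450
"""

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stg_customers (customer_id TEXT, signup_date TEXT, spend TEXT)")
conn.execute("CREATE TABLE customers (customer_id INTEGER, signup_date TEXT, spend REAL)")

# 1) Load everything into staging untyped, as an SSIS data flow would.
rows = [tuple(r.values()) for r in csv.DictReader(io.StringIO(RAW))]
conn.executemany("INSERT INTO stg_customers VALUES (?, ?, ?)", rows)

# 2) SQL transformation: only rows with a numeric spend reach the target.
conn.execute("""
    INSERT INTO customers
    SELECT CAST(customer_id AS INTEGER), signup_date, CAST(spend AS REAL)
    FROM stg_customers
    WHERE spend GLOB '[0-9]*' AND spend NOT GLOB '*[^0-9.]*'
""")
loaded = conn.execute("SELECT COUNT(*) FROM customers").fetchone()[0]
print(loaded)  # the non-numeric spend row stays behind in staging
```

Keeping staging untyped is the point of the pattern: bad source rows are quarantined where they can be inspected, instead of failing the whole load.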

Job posted by Jerrin Thomas

Data Engineer

Founded 2011
Location: Bengaluru (Bangalore)
Experience: 3 - 5 years
Salary: Best in industry (₹7L – ₹11L per annum)

This position is responsible for developing sustainable scripts to perform data extraction, manipulation, processing and visualization. We are looking for an individual with an analytical mindset and an eagerness to learn to join our analytics team.

You will be responsible for:
- Building data and intelligence collateral for Sales, Marketing and Customer Success teams.
- Working with the Business, Operations and Tech teams to understand and derive data.
- Conducting exploratory data analysis to identify patterns in data that eventually serve as an input to product.
- Identifying data trends, presenting the findings and making recommendations.
- Building ad hoc reports and/or other requested information, and meeting deadlines for customer delivery.
- Owning the quality and accuracy of data reports.
- Building automation and tooling to proactively surface data issues.

Experience and skills we are looking for:
- 3-5 years of experience in a data engineer role
- Good working knowledge of MySQL/PostgreSQL/Redshift and understanding of data modelling and relational databases
- Strong hands-on experience working in Python/R for data analysis and writing highly performant and advanced SQL queries
- Demonstrated experience in timely creation of both standard and ad hoc reporting, data analysis and presentation of findings
- Proficiency with Word, Excel and PowerPoint, with advanced skills in Excel (pivot tables, VLOOKUPs, and macros)
- Experience with Mode Analytics is good to have
- Experience with dashboard and reporting tools

What will make you successful in this role:
- Strong communication and collaboration skills, with the ability to work with technology and business partners
- Desire to learn emerging technologies
- Identifying a customer’s needs before they ask

Job posted by Sana Kausar

Senior ETL Developer

Founded 2018
via Nu-Pie
Location: Bengaluru (Bangalore)
Experience: 5 - 8 years
Salary: Best in industry (₹8L – ₹13L per annum)

Requirements:
- Minimum of 4 years’ experience working on DW/ETL projects and expert hands-on working knowledge of ETL tools
- Experience with data management and data warehouse development: star schemas, Data Vault, RDBMS and ODS
- Change data capture and slowly changing dimensions
- Data governance, data quality, data stewardship, survivorship and fuzzy matching
- Partitioning and tuning, concurrency, vertical and horizontal scaling
- ELT and ETL; Spark, Hadoop, MPP, RDBMS
- Experience with DevOps architecture, implementation and operation
- Hands-on working knowledge of Unix/Linux
- Building complex SQL queries; expert SQL and data analysis skills, with the ability to debug and fix data issues
- Complex ETL program design and coding
- Experience in shell scripting and batch scripting
- Good communication (oral and written) and interpersonal skills

Responsibilities:
- Work closely with business teams to understand their needs, participate in requirements gathering, create artifacts and seek business approval.
- Help the business define new requirements; participate in end-user meetings to derive and define business requirements, propose cost-effective solutions for data analytics, and familiarize the team with customer needs, specifications, design targets and techniques to support task performance and delivery.
- Propose good designs and solutions, adhering to the best design and standards practices.
- Review and propose industry-best tools and technologies for ever-changing business rules and data sets; conduct proofs of concept (POCs) with new tools and technologies to derive convincing benchmarks.
- Prepare the plan, design and document the architecture (high-level topology design, functional design), review it with customer IT managers, and provide detailed knowledge to the development team to familiarize them with customer requirements, specifications, design standards and techniques.
- Review code developed by other programmers; mentor, guide and monitor their work, ensuring adherence to programming and documentation policies.
- Work with functional business analysts to ensure that application programs are functioning as defined.
- Capture user feedback/comments on the delivered systems and document them for the client's and project manager's review.
- Review all deliverables before final delivery to the client for quality adherence.

Technologies (select based on requirement):
- Databases: Oracle, Teradata, Postgres, SQL Server, Big Data, Snowflake, or Redshift
- Tools: Talend, Informatica, SSIS, Matillion, Glue, or Azure Data Factory
- Utilities for bulk loading and extracting
- Languages: SQL, PL/SQL, T-SQL, Python, Java, or Scala; J/ODBC, JSON
- Data virtualization and data services development; service delivery via REST and web services; data virtualization delivery with Denodo
- ELT, ETL; cloud certification (Azure); complex SQL queries
- Data ingestion, data modeling (domain), consumption (RDBMS)
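Among the warehouse techniques this posting lists, slowly changing dimensions are the most mechanical to demonstrate. A hedged sketch of a Type 2 SCD update, where a changed attribute closes out the old row and inserts a new current one so history is preserved (the `dim_customer` schema, dates and data are illustrative, and SQLite stands in for the warehouse):

```python
# Type 2 slowly changing dimension (SCD2) sketch: when an attribute
# changes, the current row is closed out and a new current row is
# inserted, preserving history. Schema and data are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE dim_customer (
        customer_id INTEGER,
        city        TEXT,
        valid_from  TEXT,
        valid_to    TEXT,      -- NULL means "still current"
        is_current  INTEGER
    )
""")
conn.execute("INSERT INTO dim_customer VALUES (7, 'Pune', '2020-01-01', NULL, 1)")

def scd2_update(conn, customer_id, new_city, as_of):
    """Close the current row and insert a new current version if changed."""
    cur = conn.execute(
        "SELECT city FROM dim_customer WHERE customer_id = ? AND is_current = 1",
        (customer_id,),
    ).fetchone()
    if cur and cur[0] != new_city:
        # Expire the old version...
        conn.execute(
            "UPDATE dim_customer SET valid_to = ?, is_current = 0 "
            "WHERE customer_id = ? AND is_current = 1",
            (as_of, customer_id),
        )
        # ...and open the new one.
        conn.execute(
            "INSERT INTO dim_customer VALUES (?, ?, ?, NULL, 1)",
            (customer_id, new_city, as_of),
        )
    conn.commit()

scd2_update(conn, 7, "Bengaluru", "2021-06-15")
history = conn.execute(
    "SELECT city, is_current FROM dim_customer WHERE customer_id = 7 ORDER BY valid_from"
).fetchall()
print(history)  # [('Pune', 0), ('Bengaluru', 1)]
```

Contrast with Type 1, which would simply overwrite the city and lose the fact that the customer was ever in Pune.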

Job posted by Jerrin Thomas

Data Engineer

Founded 2011
Location: Bengaluru (Bangalore)
Experience: 5 - 8 years
Salary: Best in industry (₹15L – ₹22L per annum)

Responsibilities:
- Building data and intelligence collateral for Sales, Marketing and Customer Success teams.
- Working with the Business, Operations and Tech teams to understand and derive data.
- Conducting exploratory data analysis to identify patterns in data that eventually serve as an input to product.
- Identifying data trends, presenting the findings and making recommendations.
- Building ad hoc reports and/or other requested information, and meeting deadlines for customer delivery.
- Owning the quality and accuracy of data reports.
- Building automation and tooling to proactively surface data issues.

Requirements:
- 5-8 years of experience in a data engineer role
- Good working knowledge of MySQL/PostgreSQL/Redshift and understanding of data modelling and relational databases
- Strong hands-on experience working in Python/R for data analysis and writing highly performant and advanced SQL queries
- Demonstrated experience in timely creation of both standard and ad hoc reporting, data analysis and presentation of findings
- Proficiency with Word, Excel and PowerPoint, with advanced skills in Excel (pivot tables, VLOOKUPs, and macros)
- Experience with Mode Analytics is good to have
- Experience with dashboard and reporting tools
- Strong communication and collaboration skills, with the ability to work with technology and business partners
- Desire to learn emerging technologies
- Identifying a customer's needs before they ask
- Excellent presentation skills

Job posted by Hemant Surdeo

Data Engineer

Founded 2016
via slice
Location: Bengaluru (Bangalore)
Experience: 3 - 6 years
Salary: Best in industry (₹10L – ₹20L per annum)

About slice
slice is a fintech startup focused on India’s young population. We aim to build a smart, simple, and transparent platform to redesign the financial experience for millennials and bring success and happiness to people’s lives. Growing with the new generation is what we dream about and all that we want. We believe that personalization, combined with an extreme focus on superior customer service, is the key to building long-lasting relationships with young people.

About the team/role
In this role, you will have the opportunity to create a significant impact on our business and, most importantly, our customers through your technical expertise on data, as we take on challenges that can reshape the financial experience for the next generation. If you are a highly motivated team player with a knack for problem solving through technology, then we have the perfect job for you.

What you’ll do:
- Work closely with Engineering and Analytics teams to assist in schema design, normalization of databases, query optimization, etc.
- Work with AWS cloud services: S3, EMR, Glue, RDS.
- Create new and improve existing infrastructure for ETL workflows from a wide variety of data sources using SQL, NoSQL and AWS big data technologies.
- Manage and monitor performance, capacity and security of database systems, and regularly perform server tuning and maintenance activities.
- Debug and troubleshoot database errors.
- Identify, design and implement internal process improvements: optimizing data delivery, re-designing infrastructure for greater scalability, data archival.

Qualifications:
- 2+ years of experience working as a Data Engineer
- Experience with a scripting language, preferably Python
- Experience with Spark and Hadoop technologies
- Experience with AWS big data tools is a plus
- Experience with SQL and NoSQL database technologies like Redshift, MongoDB, Postgres/MySQL, BigQuery, Cassandra
- Experience with graph databases (Neo4j and OrientDB) and search databases (Elasticsearch) is a plus
- Experience in handling ETL jobs
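The query-optimization work mentioned in the first responsibility can be made concrete with a tiny before/after experiment: the same lookup with and without an index, inspected via SQLite's EXPLAIN QUERY PLAN. A hedged sketch (table, column and index names are invented; SQLite stands in for the RDS/Redshift engines the posting names, and plan wording varies by SQLite version):

```python
# A small query-optimization sketch: the same aggregate lookup before
# and after adding an index, inspected with EXPLAIN QUERY PLAN.
# Table, column and index names are illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE txns (user_id INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO txns VALUES (?, ?)",
    [(i % 100, float(i)) for i in range(1000)],
)

def plan(sql):
    """Return the query-plan detail strings for a statement."""
    return [row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql)]

query = "SELECT SUM(amount) FROM txns WHERE user_id = 42"
before = plan(query)   # full table scan (detail mentions SCAN)

conn.execute("CREATE INDEX idx_txns_user ON txns(user_id)")
after = plan(query)    # index lookup (detail mentions idx_txns_user)

print(before)
print(after)
```

The same habit carries over to Postgres/MySQL/Redshift with EXPLAIN: confirm the planner actually uses the index before trusting that an optimization worked.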

Job posted by Gunjan Sheth

Data Architect

Founded 2018
via Nu-Pie
Location: Bengaluru (Bangalore)
Experience: 10 - 18 years
Salary: Best in industry (₹14L – ₹25L per annum)

Responsibilities and requirements:
- Coordinate and provide experience-based solutions for teams deploying business intelligence, data platform, and big data solutions.
- Strong knowledge of data warehousing and big data / analytics platform solutions, and of how data architecture fits into larger data warehousing and database implementation projects as a major component of the effort.
- Ability to guide C-level executives / chief architects at major clients (Fortune 500) in data architecture decisions.
- Experience mapping out enterprise architecture transformations over a three-year period, and leading those implementations.
- Develop, implement and maintain data architecture best practices and standards; using them, define and implement technical solutions in the movement of data throughout an organization.
- Provide leadership in related technical areas of importance such as business intelligence reporting and analytics.
- Gather requirements for data architecture through business and technical interviews and round-table discussions.
- Evaluate and make decisions regarding the alternative processes that can be followed in data movement throughout an organization (ETL, SOA / web services, bulk load), and regarding the alternative tools and platforms that can be used for data collection, data distribution and reporting.
- Experience with the concepts of data modeling for both transaction-based systems and reporting / data warehouse systems.
- Evaluate data-related requirements around data quality and master data management, and understand and articulate how these factors apply to data architecture.
- Understand the concepts of data quality, data ownership, and data governance, and how they apply within a data architecture framework.
- 15+ years of experience in IT, with 10+ years in data-related positions and responsibilities.
- Excellent knowledge of multiple toolsets: ETL tools, reporting tools, data quality, metadata management, multiple database management systems, cloud, security, and MDM tools (anything the Insights & Data service line may support in future).
- Bachelor's degree or equivalent in Computer Science, Information Systems or a related field.
- Experience architecting, designing, developing and implementing project work within highly visible, data-driven applications in very large data warehousing / data repository environments with complex processing requirements.
- A proven track record in system design and performance.
- Demonstrated experience integrating systems in multi-user, multi-platform, multitasking operating system environments.
- Working knowledge of relational databases such as Oracle, DB2, SQL Server, etc.
- Ability to advocate ideas and to objectively participate in design critique.

Ideally the candidate should also have:
- Superb team-building skills, with a predisposition to building consensus and achieving goals through collaboration rather than direct line authority.
- A positive, results-oriented style, evidenced by listening, motivating, delegating, influencing, and monitoring the work being done.
- Strong interpersonal/communication skills with the professional staff, senior-level executives, and the business community at large.
- Experience delivering enterprise architecture for data & analytics for Fortune 500 companies.
- Ability to lead client architecture leadership (CIO, Chief Architect) and to interface with the CIO.
- Broad understanding of data platforms and tools (cloud platforms, infrastructure, security, data movement, data engineering, visualization, MDM, DQ, lineage), and proven experience deploying architectures for the largest clients globally.
- Strong communication and facilitation skills (ability to manage workshops with 20-30 client technical resources).
- Train clients on cloud, Enterprise Data Platform, and Capgemini's POV for Data (Business DL, Perform AI, Factory Model, etc.).

Job posted by Jerrin Thomas

Senior Software Engineer - BODS / ETL

Founded 2001
Location: Bengaluru (Bangalore)
Experience: 5 - 9 years
Salary: Best in industry

EDUCATION AND YEARS OF EXPERIENCE REQUIREMENTS:
- Bachelor's degree in Computer Science or IT preferred.
- 9+ years of experience as an ETL developer creating jobs, tables and views used to build reports and dashboards for internal and external use.

KNOWLEDGE AND SKILLS REQUIREMENTS:
- Experience in full-phase ETL implementation using tools such as Talend, Informatica, Data Services or other ETL tools required to work with Galaxy data sources.
- Experience in analysis, design and implementation of new reporting tables or views using various ETL tools associated with the Galaxy data sources.
- Experience with ETL process management, development, data modeling, warehouse architecture and testing the data, as well as maintaining it at the enterprise level as an administrator.
- Experience in technical implementations of business intelligence applications that deliver business decision-making capabilities.
- Strong SQL knowledge, with scripting languages to deal with large data sets and the ability to create ad hoc SQL reports for validation.
- Exposure to tools like Qlik, Power BI, Business Objects, Tableau, etc. is desirable for validation.
- The Galaxy data source and Hawk Marketplace projects allow for a more flexible work schedule.

JOB RESPONSIBILITIES:
- Acting as an ETL development expert, advising customers on best practices in creating and deploying reporting tables.
- Creating functional and technical requirements as an input to application design, business solution components and prototypes.
- Designing and developing high-value views to be used in reports and dashboards.
- Delivering high-quality business intelligence solutions to our internal customers.
- Developing business intelligence applications for data analysis, optimized for the best performance and scalability, using various reporting tools like Qlik.
- Interacting with business leaders to understand business strategy and conditions, and being able to frame problems.
- Providing on-call support for the reporting platform and the supporting tables, views, and data sources.

Job posted by Arun Jayaraman

Software Architect/CTO

Founded 2018
via Nu-Pie
Location: Remote, Bengaluru (Bangalore)
Experience: 6 - 9 years
Salary: ₹7L - ₹20L

- Coordinate and provide experience-based solutions for teams deploying business intelligence, data platform, and big data solutions.
- Strong knowledge of data warehousing and big data/analytics platform solutions, and of how data architecture fits into larger data warehousing and database implementation projects as a major component of the effort.
- Ability to guide C-level executives and chief architects at major (Fortune 500) clients in data architecture decisions.
- Experience mapping out enterprise architecture transformations over a 3-year period, and leading those implementations.
- Develop, implement, and maintain data architecture best practices and standards; using them, define and implement technical solutions for the movement of data throughout an organization.
- Provide leadership in related technical areas of importance such as business intelligence, reporting, and analytics.
- Gather requirements for data architecture through business and technical interviews and round-table discussions.
- Evaluate and make decisions regarding the alternative processes that can be followed in data movement throughout an organization: ETL, SOA/web services, bulk load.
- Evaluate and make decisions regarding the alternative tools and platforms that can be used for data collection, data distribution, and reporting.
- Show experience with data modeling concepts for both transaction-based systems and reporting/data warehouse systems.
- Evaluate data-related requirements around data quality and master data management, and understand and articulate how these factors apply to data architecture.
- Understand the concepts of data quality, data ownership, and data governance, and how they apply within a data architecture framework.
- 15+ years of experience in IT, with 10+ years in data-related positions and responsibilities.
- Excellent knowledge of multiple toolsets: ETL tools, reporting tools, data quality, metadata management, multiple database management systems, cloud, security, and MDM tools (anything the Insights & Data service line may support in future).
- Bachelor's degree or equivalent in Computer Science, Information Systems, or a related field.
- Experience architecting, designing, developing, and implementing project work within highly visible, data-driven applications in very large data warehousing / data repository environments with complex processing requirements.
- A proven track record in system design and performance.
- Demonstrated experience integrating systems in multi-user, multi-platform, multitasking operating system environments.
- Working knowledge of relational databases such as Oracle, DB2, SQL Server, etc.
- Ability to advocate ideas and to participate objectively in design critique.

Ideally the candidate should also have:
- Superb team-building skills, with a predisposition to building consensus and achieving goals through collaboration rather than direct line authority.
- A positive, results-oriented style, evidenced by listening, motivating, delegating, influencing, and monitoring the work being done.
- Strong interpersonal/communication skills with professional staff, senior-level executives, and the business community at large.
- Experience delivering enterprise architecture for data & analytics for Fortune 500 companies.
- Ability to lead client architecture leadership (CIO, Chief Architect).
- Broad understanding of data platforms and tools (cloud platforms, infrastructure, security, data movement, data engineering, visualization, MDM, data quality, lineage) and proven experience deploying architectures for the largest clients globally.
- Strong communication and facilitation skills (ability to manage workshops with 20-30 client technical resources) and the ability to interface with the CIO.
- Train clients on cloud, Enterprise Data Platform, and the Capgemini POV for Data (Business DL, Perform AI, Factory Model, etc.).

Job posted by Sanjay Biswakarma

Data Engineer (Databricks)

Founded 2013
Location: Bengaluru (Bangalore)
Experience: 5 - 8 years
Salary: ₹20L - ₹30L

- Insurance P&C and specialty domain experience is a plus.
- Experience in a cloud-based architecture preferred, such as Databricks, Azure Data Lake, Azure Data Factory, etc.
- Strong understanding of ETL fundamentals and solutions.
- Proficient in writing advanced/complex SQL; expertise in performance tuning and optimization of SQL queries required.
- Strong experience in Python/PySpark and Spark SQL.
- Experience troubleshooting data issues, analyzing end-to-end data pipelines, and working with various teams to resolve issues and solve complex problems.
- Strong experience developing Spark applications using PySpark and SQL for data extraction, transformation, and aggregation from multiple formats, analyzing and transforming the data to uncover insights and actionable intelligence for internal and external use.

Job posted by Manjunath Multirecruit

QA Engineer (ETL Testing)

Founded 2015
Location: Remote, Bengaluru (Bangalore)
Experience: 2 - 6 years
Salary: ₹8L - ₹20L

Must have (please look for profiles with an ETL testing background and 2-6 years' experience; the three skills below are mandatory):
- Python scripting / PySpark experience
- SQL querying
- Data warehousing implementation experience

Good to have:
- Knowledge of working on Azure Databricks using PySpark
- Knowledge of working with ADLS Gen 2 / Delta Lake (Delta Tables)
- Any ETL job orchestration experience
- Experience in Agile methodologies
- Code migration to QA/UAT/PROD through DevOps
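The SQL and scripting skills this role asks for typically meet in one task: reconciling a loaded target table against its source. A minimal sketch of that kind of check, using Python's built-in sqlite3 as a stand-in for the real source and warehouse databases (the table and column names here are invented for illustration):

```python
import sqlite3

# In-memory database standing in for the source system and warehouse target.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src_orders (id INTEGER, amount REAL);
    CREATE TABLE tgt_orders (id INTEGER, amount REAL);
    INSERT INTO src_orders VALUES (1, 10.0), (2, 20.5), (3, 5.0);
    INSERT INTO tgt_orders VALUES (1, 10.0), (2, 20.5), (3, 5.0);
""")

def reconcile(table_src, table_tgt):
    """Compare row counts and amount totals between source and target."""
    q = "SELECT COUNT(*), ROUND(SUM(amount), 2) FROM {}"
    src = conn.execute(q.format(table_src)).fetchone()
    tgt = conn.execute(q.format(table_tgt)).fetchone()
    return {"rowcount_match": src[0] == tgt[0], "sum_match": src[1] == tgt[1]}

result = reconcile("src_orders", "tgt_orders")
print(result)
```

In a real pipeline the same control-total query would run against the actual source and target connections (e.g. via PySpark or a warehouse driver) rather than sqlite3.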

Job posted by Priyanka U

Business Analyst (Telecom) - For Career-Break Candidates

Location: Bengaluru (Bangalore), Gurgaon
Experience: 4 - 7 years
Salary: ₹8L - ₹13L

- Strong analytical skills, with the ability to collect, organize, analyse, and disseminate significant amounts of information with attention to detail and accuracy.
- Prior experience with statistical modelling techniques, AI/ML models, etc. will be a value-add.
- Working knowledge of reporting packages (Business Objects, Qlik, Power BI, etc.) and ETL frameworks will be an advantage.
- Knowledge of statistics and experience using statistical packages for analysing datasets (MS Excel, SPSS, SAS, etc.).
- Experience with Python, R, and other scripting languages is desirable but not a must.

Job posted by Rampriya K

Data Engineer

Founded 2016
Location: Remote, Bengaluru (Bangalore)
Experience: 2 - 5 years
Salary: ₹6L - ₹12L

We are actively seeking a Senior Data Engineer experienced in building data pipelines and integrations from third-party data sources by writing custom automated ETL jobs in Python. The role works in partnership with other members of the Business Analytics team to support the development and implementation of new and existing data warehouse solutions for our clients, including designing the database import/export processes used to generate client data warehouse deliverables.

Requirements:
- 2+ years of experience as an ETL developer, with strong data architecture knowledge of data warehousing concepts, SQL development and optimization, and operational support models.
- Experience using Python to automate ETL/data-processing jobs.
- Design and develop ETL and data processing solutions using data integration tools, Python scripts, and AWS/Azure/on-premise environments.
- Experience with, or willingness to learn, AWS Glue / AWS Data Pipeline / Azure Data Factory for data integration.
- Develop and create transformation queries, views, and stored procedures for ETL processes and process automation.
- Document data mappings, data dictionaries, processes, programs, and solutions per established data governance standards.
- Work with the data analytics team to assess and troubleshoot potential data quality issues at key intake points, such as validating control totals at intake and again after transformation, and transparently build lessons learned into future data quality assessments.
- Solid experience with data modeling, business logic, and RESTful APIs.
- Solid experience in the Linux environment.
- Experience with NoSQL/PostgreSQL preferred; experience working with databases such as MySQL, NoSQL, and Postgres, with enterprise-level connectivity experience (such as connecting over TLS and through proxies).
- Experience with NGINX and SSL.
- Performance-tune data processes and SQL queries, and recommend and implement data process optimization and query tuning techniques.
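As a rough illustration of the custom automated ETL jobs in Python that this role describes, here is a minimal extract-transform-load sketch. The CSV payload, table name, and cleaning rule are all hypothetical, and sqlite3 stands in for the real warehouse; a production job would pull from an API or file drop and load into Postgres/MySQL instead:

```python
import csv
import io
import sqlite3

# Hypothetical third-party extract; in practice this would be fetched
# from an API or SFTP drop rather than embedded as a string.
RAW_CSV = """order_id,amount,country
1,10.50,IN
2,,IN
3,7.25,US
"""

def extract(text):
    """Parse the raw CSV into dict rows."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Drop rows missing an amount and cast fields to proper types."""
    return [
        (int(r["order_id"]), float(r["amount"]), r["country"])
        for r in rows
        if r["amount"]
    ]

def load(rows, conn):
    """Load cleaned rows into a warehouse staging table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS stg_orders "
        "(order_id INT, amount REAL, country TEXT)"
    )
    conn.executemany("INSERT INTO stg_orders VALUES (?, ?, ?)", rows)

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
count = conn.execute("SELECT COUNT(*) FROM stg_orders").fetchone()[0]
print(count)  # the row with the missing amount is filtered out
```

The control-total idea mentioned in the requirements maps directly onto this structure: record counts at `extract`, after `transform`, and after `load`, and compare them.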

Job posted by Pavel Gupta

Data Engineer

Founded 2002
via PayU
Location: Remote, Bengaluru (Bangalore)
Experience: 2 - 5 years
Salary: ₹5L - ₹20L

Role: Data Engineer
Company: PayU
Location: Bangalore / Mumbai
Experience: 2-5 yrs

About Company:
PayU is the payments and fintech business of Prosus, a global consumer internet group and one of the largest technology investors in the world. Operating and investing globally in markets with long-term growth potential, Prosus builds leading consumer internet companies that empower people and enrich communities.

The leading online payment service provider in 36 countries, PayU is dedicated to creating a fast, simple, and efficient payment process for merchants and buyers. Focused on empowering people through financial services and creating a world without financial borders where everyone can prosper, PayU is one of the biggest investors in the fintech space globally, with investments totalling $700 million to date. PayU also specializes in credit products and services for emerging markets across the globe. We are dedicated to removing risks to merchants, allowing consumers to use credit in ways that suit them, and enabling a greater number of global citizens to access credit services. Our local operations in Asia, Central and Eastern Europe, Latin America, the Middle East, Africa, and South East Asia enable us to combine the expertise of high-growth companies with our own unique local knowledge and technology to ensure that our customers have access to the best financial services.

India is the biggest market for PayU globally, and the company has already invested $400 million in this region in the last 4 years. In its next phase of growth, PayU is developing a full regional fintech ecosystem providing multiple digital financial services in one integrated experience. We are going to do this through three mechanisms: build; co-build/partner; and select strategic investments. PayU supports over 350,000+ merchants and millions of consumers making payments online, with over 250 payment methods and 1,800+ payment specialists. The markets in which PayU operates represent a potential consumer base of nearly 2.3 billion people and a huge growth potential for merchants.

Job responsibilities:
- Design infrastructure for data, especially for (but not limited to) consumption in machine learning applications.
- Define the database architecture needed to combine and link data, and ensure integrity across different sources.
- Ensure performance of data systems for machine learning, from customer-facing web and mobile applications using cutting-edge open-source frameworks, to highly available RESTful services, to back-end Java-based systems.
- Work with large, fast, complex data sets to solve difficult, non-routine analysis problems, applying advanced data handling techniques where needed.
- Build data pipelines, including implementing, testing, and maintaining infrastructural components of the data engineering stack.
- Work closely with data engineers, ML engineers, and SREs to gather data engineering requirements and to prototype, develop, validate, and deploy data science and machine learning solutions.

Requirements to be successful in this role:
- Strong knowledge of and experience in Python, Pandas, data wrangling, ETL processes, statistics, data visualisation, data modelling, and Informatica.
- Strong experience with scalable compute solutions such as Kafka and Snowflake.
- Strong experience with workflow management libraries and tools such as Airflow, AWS Step Functions, etc.
- Strong experience with data engineering practices (i.e. data ingestion pipelines and ETL).
- A good understanding of machine learning methods, algorithms, pipelines, testing practices, and frameworks.
- (Preferred) MEng/MSc/PhD degree in computer science, engineering, mathematics, physics, or equivalent (preference: DS/AI).
- Experience designing and implementing tools that support sharing of data, code, and practices across organizations at scale.

Job posted by Vishakha Sonde

SQL- DWH Developer

Founded 2015
Location: Bengaluru (Bangalore)
Experience: 5 - 9 years
Salary: ₹15L - ₹25L

Work days: Sunday through Thursday. Week off: Friday & Saturday. Day shift.

Key responsibilities:
- Create, design, and develop data models.
- Prepare plans for all ETL (extract/transform/load) procedures and architectures.
- Validate results and create business reports.
- Monitor and tune data loads and queries.
- Develop and prepare a schedule for a new data warehouse.
- Analyze large databases and recommend appropriate optimizations.
- Administer all requirements and design various functional specifications for data.
- Provide support across the software development life cycle.
- Prepare various code designs and ensure their efficient implementation.
- Evaluate all code and ensure the quality of all project deliverables.
- Monitor data warehouse work and provide subject matter expertise.
- Hands-on BI practices, data structures, data modeling, and SQL skills.

Hard skills for a data warehouse developer:
- Hands-on experience with ETL tools, e.g. DataStage, Informatica, Pentaho, Talend.
- Sound knowledge of SQL.
- Experience with SQL databases such as Oracle, DB2, and SQL Server.
- Experience using data warehouse platforms, e.g. SAP, Birst.
- Experience designing, developing, and implementing data warehouse solutions.
- Project management and system development methodology.
- Ability to proactively research solutions and best practices.

Soft skills for data warehouse developers:
- Excellent analytical skills.
- Excellent verbal and written communication.
- Strong organization skills.
- Ability to work on a team, as well as independently.
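The data-modeling duties this role lists typically revolve around star schemas: a central fact table joined to descriptive dimension tables. A minimal, hypothetical sketch of that layout (invented table and column names, with sqlite3 standing in for the warehouse):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A tiny star schema: one fact table referencing two dimensions.
conn.executescript("""
    CREATE TABLE dim_date (date_key INT PRIMARY KEY, year INT, month INT);
    CREATE TABLE dim_product (product_key INT PRIMARY KEY, name TEXT);
    CREATE TABLE fact_sales (
        date_key INT REFERENCES dim_date(date_key),
        product_key INT REFERENCES dim_product(product_key),
        qty INT, revenue REAL
    );
    INSERT INTO dim_date VALUES (20240101, 2024, 1), (20240201, 2024, 2);
    INSERT INTO dim_product VALUES (1, 'widget'), (2, 'gadget');
    INSERT INTO fact_sales VALUES
        (20240101, 1, 3, 30.0), (20240101, 2, 1, 25.0), (20240201, 1, 2, 20.0);
""")

# Typical reporting query: revenue per product, resolved via a dimension join.
rows = conn.execute("""
    SELECT p.name, SUM(f.revenue)
    FROM fact_sales f JOIN dim_product p ON p.product_key = f.product_key
    GROUP BY p.name ORDER BY p.name
""").fetchall()
print(rows)
```

The "validating results and creating business reports" responsibility is essentially running queries like the last one against both the source and the loaded warehouse and comparing totals.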

Job posted by Priyanka U

Senior Support Engineer - App Ops / Data Ops

Founded 2001
Location: Bengaluru (Bangalore)
Experience: 5 - 12 years
Salary: Best in industry

Required:
- 5-10 years of experience in the application and/or data operations support domain.
- Expertise in performing RCA (root-cause analysis) and collaborating with development teams on CoE (correction of errors).
- Good communication and collaboration skills: liaise with product, operations, and business teams to understand requirements and provide data extracts and reports on a need basis.
- Experience working in an enterprise environment, with good discipline and adherence to SLAs.
- Good understanding of ticketing tools used to track requests and manage the lifecycle of multiple requests, e.g. JIRA, ServiceNow, Rally, ChangeGear.
- Orientation towards addressing the root cause of any issue, i.e. collaborating and following up with development teams to ensure a permanent fix and prevention are given high priority.
- Ability to create SOPs (system operating procedures) in Confluence/wiki so the support team has a good reference to utilise.
- Self-starter and collaborator, able to independently acquire the knowledge required to succeed in the job.
- Ability to mentor and lead Data Ops team members for a high quality of customer experience and timely resolution of issues.
- Adherence to a well-defined process for workflow with partner teams.

Specifically for the Data Ops Engineer role, the following experience is required:
- BI, reporting, and data warehousing domain.
- Production support for data queries: monitoring, analysis, and triage of issues.
- Experience using BI tools like MicroStrategy, Qlik, Power BI, Business Objects.
- Expertise in data analysis and writing SQL queries to provide insights into production data.
- Experience with relational database (RDBMS) and data-mart technologies like DB2, Redshift, SQL Server, MySQL, Netezza, etc.
- Ability to monitor ETL jobs in an AWS stack with tools like Tidal, Autosys, etc.
- Experience with big data platforms like Amazon Redshift.

Responsibilities (Level 2 production support):
- Resolve job failures: re-runs based on SOPs.
- Root-cause analysis and resolution of report failures.
- Address queries about existing reports and APIs.
- Handle ad-hoc data requests for product and business stakeholders: transactions per day, per entity (merchant, card type, card category); custom extracts.
- Track and report the health of the system; create a matrix of issue volume.
- Coordinate and set up an escalation workflow.
- Provide regular status reports for stakeholder review.

Job posted by Srinivas Avanthkar

Sr. SDET (Data engineering)

Founded 2001
Location: Bengaluru (Bangalore)
Experience: 5 - 9 years
Salary: Best in industry

About Blackhawk Network:
Blackhawk Network is building a digital platform and products that bring people and brands together. We facilitate cross-channel payments via cash-in, cash-out, and mobile payments. By leveraging blockchain, smart contracts, serverless technology, and real-time payment systems, we are unlocking the next million users through innovation.

Our employees are our biggest assets! Come find out how we engage with the biggest brands in the world. We look for people who collaborate, who are inspirational, and who have passion that can make a difference by working as a team while striving for global excellence. You can expect a strong investment in your professional growth and a dedication to crafting a successful, sustainable career for you. Our teams are composed of highly talented and passionate 'A' players who are also invested in mentoring and enabling the best qualities. Our vibrant culture and high expectations will kindle your passion and bring out the best in you!

As a leader in branded payments, we are building a strong, diverse team and expanding in Asia Pacific: we are hiring in Bengaluru, India! This is an amazing opportunity for problem solvers who want to be part of an innovative and creative engineering team that values your contribution to the company. If this role has your name written all over it, apply now with a resume so that we can explore further and get connected. If you enjoy building world-class payment applications, are highly passionate about pushing the boundaries of scale and availability on the cloud, leveraging next-horizon technologies, rapidly delivering features to production, making data-driven decisions on product development, and collaborating and innovating with like-minded experts, then this would be your ideal job. Blackhawk is seeking passionate backend engineers at all levels to build our next generation of payment systems on public cloud infrastructure. Our team enjoys working together to contribute to meaningful work seen by millions of merchants worldwide.

As a Senior SDET, you will work closely with data engineers to automate developed features and manually test new data ETL jobs, data pipelines, and reports. You will own the complete architecture of the automation framework, and plan and design automation for data ingestion, transformation, and reporting/visualization. You will build high-quality automation frameworks to cover end-to-end testing of the data platforms, ensure test data setup, and pre-empt post-production issues through high-quality testing in the lower environments. You will get an opportunity to contribute at all levels of the test pyramid. You will also work with customer success and product teams to replicate post-production release issues.

Key qualifications:
- Bachelor's degree in Computer Science, Engineering, or related fields.
- 5+ years of experience testing data ingestion, visualization, and info delivery systems.
- Real passion for data quality, reconciliation, and uncovering hard-to-find scenarios and bugs.
- Proficiency in at least one programming language (preferably Python or Java).
- Expertise in end-to-end ETL (e.g. DataStage, Matillion) and BI platform (e.g. MicroStrategy, Power BI) testing and data validation.
- Experience working with big data technologies such as Hadoop and MapReduce is desirable.
- Excellent analytical, problem-solving, and communication skills.
- Self-motivated, results-oriented, and deadline-driven.
- Experience with databases, data visualization, and dashboarding tools would be desirable.
- Experience working with Amazon Web Services (AWS) and Redshift is desirable.
- Excellent knowledge of the software development lifecycle, testing methodologies, QA terminology, processes, and tools.
- Experience with automation using automation frameworks and tools, such as TestNG, JUnit, and Selenium.

Job posted by Sandeep Madhavan

Data Engineer

Founded 2017
Location: Bengaluru (Bangalore)
Experience: 2 - 5 years
Salary: ₹15L - ₹28L

Job description:
We are looking for a Data Engineer who will be responsible for collecting, storing, processing, and analyzing huge sets of data coming from different sources.

Responsibilities:
- Work with big data tools and frameworks to provide requested capabilities.
- Identify development needs in order to improve and streamline operations.
- Develop and manage BI solutions.
- Implement ETL processes and data warehousing.
- Monitor performance and manage infrastructure.

Skills:
- Proficient understanding of distributed computing principles.
- Proficiency with Hadoop and Spark.
- Experience building stream-processing systems using solutions such as Kafka and Spark Streaming.
- Good knowledge of data querying tools: SQL and Hive.
- Knowledge of various ETL techniques and frameworks.
- Experience with at least one of Python, Java, or Scala.
- Experience with cloud services such as AWS or GCP.
- Experience with NoSQL databases such as DynamoDB and MongoDB will be an advantage.
- Excellent written and verbal communication skills.
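Stream-processing systems of the Kafka / Spark Streaming kind mentioned in this posting usually come down to aggregating events over time windows. A toy, dependency-free sketch of a tumbling-window count in plain Python; the event data is invented, and a real system would consume records from a Kafka topic instead of a list:

```python
from collections import defaultdict

# Hypothetical event stream: (timestamp_seconds, user_id) pairs, as a
# stand-in for records consumed from a Kafka topic.
events = [(1, "a"), (3, "b"), (7, "a"), (12, "c"), (14, "a"), (21, "b")]

def tumbling_window_counts(stream, window_s):
    """Count events per fixed, non-overlapping time window."""
    counts = defaultdict(int)
    for ts, _user in stream:
        window_start = ts // window_s * window_s  # align to window boundary
        counts[window_start] += 1
    return dict(counts)

result = tumbling_window_counts(events, 10)
print(result)  # three events land in [0,10), two in [10,20), one in [20,30)
```

Engines like Spark Structured Streaming provide the same grouping declaratively (plus watermarking for late data), but the windowing arithmetic is the same idea.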

Job posted by Keerthana k

ETL Talend developer

Founded 2011
Location: Bengaluru (Bangalore)
Experience: 5 - 19 years
Salary: ₹10L - ₹30L

Strong exposure to ETL / big data / Talend / Hadoop / Spark / Hive / Pig.

To be considered for a Senior Data Engineer position, a candidate must have a proven track record of architecting data solutions on current and advanced technical platforms, and the leadership ability to lead a team providing data-centric solutions with best practices and modern technologies in mind. They build collaborative relationships across all levels of the business and the IT organization. They possess analytic and problem-solving skills, and the ability to research and provide appropriate guidance for synthesizing complex information and extracting business value. They have the intellectual curiosity and ability to deliver solutions with creativity and quality; work effectively with business and customers to obtain business value for the requested work; and can communicate technical results to both technical and non-technical users using effective storytelling techniques and visualizations. They have demonstrated the ability to perform high-quality work with innovation, both independently and collaboratively.

Job posted by Shobha B K

Technical Architect/CTO

Founded 2019
Location: Bengaluru (Bangalore), Paris
Experience: 6 - 10 years
Salary: ₹16L - ₹22L

Exciting opportunity for a contractor to work with a start-up firm in the product-cum-services industry. We are looking for someone with rich experience in the skills below to join us immediately. This role is for 1 month, during which the person will work from the client site in Paris to understand the system architecture and document it. Contract extension will depend purely on individual performance. Since the requirement is immediate and critical, we need someone who can join us soon and travel to Paris in December.
- Hands-on experience handling multiple data sources/datasets
- Experience in a data/BI architect role
- Expert in SSIS, SSRS, SSAS
- Knowledge of writing MDX queries
- Technical document preparation
- Excellent communication
- Process-oriented
- Strong project management
- Able to think out of the box and propose ideas for better solutions
- Outstanding team player with a positive attitude

Job posted by Ajith Gopi

Senior BI & ETL Developer

Founded 1999
via Wibmo
Location: Bengaluru (Bangalore)
Experience: 5 - 10 years
Salary: ₹10L - ₹15L

Critical tasks and expected contributions/results:
The role is primarily focused on the design, development, and testing of ETL workflows (using Talend), as well as the batch management and error-handling processes, and on building business intelligence applications using tools like Power BI. Additional responsibilities include the documentation of technical specifications and related project artefacts.
- Gather requirements and propose possible ETL solutions for the in-house designed data warehouse.
- Analyze and translate functional specifications and change requests into technical specifications.
- Design and create star-schema data models.
- Design, build, and implement business intelligence solutions using Power BI.
- Develop, implement, and test ETL program logic.
- Handle deployment and support any related issues.

Key competencies:
- A good understanding of the concepts and best practices of data warehouse ETL design, with the ability to apply these suitably to solve specific business needs.
- Expert knowledge of an ETL tool like Talend: more than 8 years' experience designing and developing ETL work packages, with demonstrable expertise in Talend.
- Knowledge of BI tools like Power BI is required.
- Ability to follow functional ETL specifications and to challenge business logic and schema design where appropriate, as well as manage time effectively.
- Exposure to performance tuning is essential.
- Good organisational skills, with a methodical and structured approach to design and development.
- Good interpersonal skills.

Job posted by Shirin AM

Database Architect

Founded 2017
Location: Bengaluru (Bangalore)
Experience: 5 - 10 years
Salary: ₹10L - ₹20L

The candidate will be responsible for all aspects of data acquisition, data transformation, and analytics scheduling and operationalization to drive high-visibility, cross-division outcomes. Expected deliverables include the development of big data ELT jobs using a mix of technologies, stitching together complex and seemingly unrelated data sets for mass consumption, and automating and scaling analytics into GRAND's data lake.

Key responsibilities:
- Create a GRAND data lake and warehouse that pools the data from GRAND's different regions and stores in GCC.
- Ensure source data quality measurement, enrichment, and reporting of data quality.
- Manage all ETL and data model update routines.
- Integrate new data sources into the DWH.
- Manage the DWH cloud (AWS/Azure/Google) and infrastructure.

Skills needed:
- Very strong in SQL, with demonstrated RDBMS experience (e.g. SQL, Postgres, MongoDB); Unix shell scripting preferred.
- Experience with UNIX and comfort working with the shell (bash or Korn shell preferred).
- Good understanding of data warehousing concepts and big data systems: Hadoop, NoSQL, HBase, HDFS, MapReduce.
- Align with the systems engineering team to propose and deploy the new hardware and software environments required for Hadoop, and to expand existing environments.
- Work with data delivery teams to set up new Hadoop users, including setting up Linux users and setting up and testing HDFS, Hive, Pig, and MapReduce access for the new users.
- Cluster maintenance, including creation and removal of nodes, using tools like Ganglia, Nagios, and Cloudera Manager Enterprise.
- Performance tuning of Hadoop clusters and Hadoop MapReduce routines.
- Screen Hadoop cluster job performance and do capacity planning.
- Monitor Hadoop cluster connectivity and security.
- File system management and monitoring; HDFS support and maintenance.
- Collaborate with application teams to install operating system and Hadoop updates, patches, and version upgrades when required.
- Define, develop, document, and maintain Hive-based ETL mappings and scripts.

Job posted by Rahul Malani
Why apply via CutShort?
Connect with actual hiring teams and get their fast response. No spam.