
Hadoop Jobs in Bangalore (Bengaluru)

Explore top Hadoop job opportunities in Bangalore (Bengaluru) at top companies and startups. All jobs are posted by verified employees, who can be contacted directly below.

Application Developer/Product Developer

Founded 1995
Products and services
Location: Bengaluru (Bangalore)
Experience: 6 - 12 years
Salary: ₹12L - ₹30L

Specification:
Location: Bangalore
Designation: Senior Engineer/Tech Lead (the designation will be decided based on skills, CTC, etc.)

Qualification:
- Bachelor's or master's degree in Computer Science or an equivalent area
- 6-12 years of experience in software development building complex enterprise systems that involve large-scale data processing
- Very good experience in one of the following languages: Java, Scala, C#
- Hands-on experience with databases like SQL Server, PostgreSQL or similar is required
- Knowledge of document stores like Elasticsearch or MongoDB is desirable
- Hands-on experience with Big Data processing technologies like Hadoop/Spark is required
- Strong cloud infrastructure experience with AWS and/or Azure
- Experience with container technologies like Docker and Kubernetes
- Experience with engineering practices such as code refactoring, design patterns, design-driven development, continuous integration, building highly scalable applications, and application security
- Knowledge of the Agile software development process

What you'll do:
As a Sr. Engineer or Technical Lead, you will lead software development projects in a hands-on manner. You will spend about 70% of your time writing and reviewing code and creating software designs. Your expertise will expand into database design, core middle-tier modules, performance tuning, cloud technologies, DevOps and continuous delivery.
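For a sense of the large-scale data processing this role describes, here is a minimal PySpark sketch of a batch aggregation job. The bucket path, column names, and dataset are illustrative assumptions, not anything from this posting.

# Hypothetical sketch: a large-scale batch aggregation in PySpark.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("order-rollup").getOrCreate()

# Read a large partitioned dataset (path is made up for illustration).
orders = spark.read.parquet("s3a://example-bucket/orders/")

# Aggregate revenue per customer per day.
daily_revenue = (
    orders
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("customer_id", "order_date")
    .agg(F.sum("amount").alias("revenue"))
)

# Write back partitioned by date so downstream reads stay cheap.
daily_revenue.write.mode("overwrite").partitionBy("order_date") \
    .parquet("s3a://example-bucket/daily_revenue/")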

Job posted by Raji Arun

Senior ETL Developer

Founded 2018
Products and services
via Nu-Pie
Location: Bengaluru (Bangalore)
Experience: 5 - 8 years
Salary: ₹8L - ₹13L

Requirements:
- Minimum of 4 years' experience working on DW/ETL projects and expert hands-on working knowledge of ETL tools
- Experience with data management and data warehouse development: star schemas, data vaults, RDBMS, and ODS
- Change data capture, slowly changing dimensions, data governance, data quality, partitioning and tuning, data stewardship, survivorship, fuzzy matching, concurrency, vertical and horizontal scaling
- ELT and ETL with Spark, Hadoop, MPP, RDBMS
- Experience with DevOps architecture, implementation and operation
- Hands-on working knowledge of Unix/Linux
- Building complex SQL queries; expert SQL and data analysis skills, with the ability to debug and fix data issues
- Complex ETL program design and coding; experience in shell scripting and batch scripting
- Good communication (oral and written) and interpersonal skills

Responsibilities:
- Work closely with business teams to understand their needs, participate in requirements gathering, create artifacts and seek business approval
- Help business define new requirements; participate in end-user meetings to derive and define business requirements, propose cost-effective solutions for data analytics, and familiarize the team with customer needs, specifications, design targets and techniques to support task performance and delivery
- Propose good designs and solutions and adhere to the best design and standard practices
- Review and propose industry-best tools and technology for ever-changing business rules and data sets; conduct proofs of concept (POCs) with new tools and technologies to derive convincing benchmarks
- Prepare the plan, design and document the architecture, high-level topology design and functional design; review them with customer IT managers and provide detailed knowledge to the development team to familiarize them with customer requirements, specifications, design standards and techniques
- Review code developed by other programmers; mentor, guide and monitor their work, ensuring adherence to programming and documentation policies
- Work with functional business analysts to ensure that application programs function as defined
- Capture user feedback on the delivered systems and document it for the client's and project manager's review
- Review all deliverables before final delivery to the client for quality adherence

Technologies (select based on requirement):
- Databases: Oracle, Teradata, Postgres, SQL Server, Big Data, Snowflake, or Redshift
- Tools: Talend, Informatica, SSIS, Matillion, Glue, or Azure Data Factory; utilities for bulk loading and extracting
- Languages: SQL, PL/SQL, T-SQL, Python, Java, or Scala; J/ODBC, JSON
- Data virtualization and data services development; service delivery via REST/web services; data virtualization delivery with Denodo
- ELT, ETL; Azure cloud certification; complex SQL queries; data ingestion, data modeling (domain), consumption (RDBMS)
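To make one of the concepts above concrete, here is a hedged PySpark sketch of a Type 2 slowly changing dimension update: changed rows are closed out and new versions appended. All paths, table names, and the tracked column ("address") are assumptions for illustration.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("scd2-sketch").getOrCreate()

# Hypothetical inputs: the current dimension and a batch of incoming changes.
dim = spark.read.parquet("/warehouse/dim_customer")
updates = spark.read.parquet("/staging/customer_updates")

current = dim.filter(F.col("is_current"))

# Versions whose tracked attribute (here: address) actually changed.
changed = (current.alias("d")
           .join(updates.alias("u"), "customer_id")
           .filter(F.col("d.address") != F.col("u.address")))

# Close out the old version of each changed row...
expired = (changed.select("d.*")
           .withColumn("is_current", F.lit(False))
           .withColumn("valid_to", F.current_date()))

# ...and open a new version carrying the updated attributes.
fresh = (changed.select("u.*")
         .withColumn("is_current", F.lit(True))
         .withColumn("valid_from", F.current_date())
         .withColumn("valid_to", F.lit(None).cast("date")))

# Untouched rows + expired + new versions = the refreshed dimension.
untouched = dim.join(changed.select("customer_id"), "customer_id", "left_anti")
refreshed = (untouched.unionByName(expired, allowMissingColumns=True)
                      .unionByName(fresh, allowMissingColumns=True))
refreshed.write.mode("overwrite").parquet("/warehouse/dim_customer_v2")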

Job posted by Jerrin Thomas

Hadoop Developer

Founded 2015
Products and services
Location: Bengaluru (Bangalore)
Experience: 2 - 5 years
Salary: ₹6L - ₹10L

1. Design and development of data ingestion pipelines.
2. Perform data migration and conversion activities.
3. Develop and integrate software applications using suitable development methodologies and standards, applying standard architectural patterns, taking into account critical performance characteristics and security measures.
4. Collaborate with Business Analysts, Architects and Senior Developers to establish the physical application framework (e.g. libraries, modules, execution environments).
5. Perform end-to-end automation of the ETL process for the various datasets being ingested into the big data platform.
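A minimal sketch of points 1 and 5, assuming PySpark: an automated ingestion step that lands raw CSV drops into the platform as schema-validated, partitioned Parquet. The paths, schema fields, and dataset name are all illustrative.

from pyspark.sql import SparkSession
from pyspark.sql.types import (StructType, StructField, StringType,
                               DoubleType, TimestampType)

spark = SparkSession.builder.appName("ingest-events").getOrCreate()

# Enforce an explicit schema instead of trusting inference on raw files.
schema = StructType([
    StructField("event_id", StringType(), nullable=False),
    StructField("event_type", StringType(), True),
    StructField("value", DoubleType(), True),
    StructField("event_time", TimestampType(), True),
])

def ingest(dataset: str) -> None:
    """Read a raw landing zone, apply a basic quality gate, write curated Parquet."""
    raw = spark.read.csv(f"/landing/{dataset}/*.csv", header=True, schema=schema)
    clean = raw.dropna(subset=["event_id"])  # drop rows failing the key check
    (clean.write.mode("append")
          .partitionBy("event_type")
          .parquet(f"/curated/{dataset}/"))

ingest("clickstream")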

Job posted by Harpreet Kour

Data Engineer

Founded 2016
Products and services
via slice
Location: Bengaluru (Bangalore)
Experience: 3 - 6 years
Salary: ₹10L - ₹20L

About slice:
slice is a fintech startup focused on India's young population. We aim to build a smart, simple, and transparent platform to redesign the financial experience for millennials and bring success and happiness to people's lives. Growing with the new generation is what we dream about and all that we want. We believe that personalization, combined with an extreme focus on superior customer service, is the key to building long-lasting relationships with young people.

About the team/role:
In this role, you will have the opportunity to create a significant impact on our business, and most importantly our customers, through your technical expertise in data, as we take on challenges that can reshape the financial experience for the next generation. If you are a highly motivated team player with a knack for problem solving through technology, then we have the perfect job for you.

What you'll do:
- Work closely with the Engineering and Analytics teams to assist in schema design, normalization of databases, query optimization, etc.
- Work with AWS cloud services: S3, EMR, Glue, RDS
- Create new and improve existing infrastructure for ETL workflows from a wide variety of data sources using SQL, NoSQL and AWS big data technologies
- Manage and monitor the performance, capacity and security of database systems, and regularly perform server tuning and maintenance activities
- Debug and troubleshoot database errors
- Identify, design and implement internal process improvements: optimizing data delivery, re-designing infrastructure for greater scalability, data archival

Qualification:
- 2+ years of experience working as a Data Engineer
- Experience with a scripting language, preferably Python
- Experience with Spark and Hadoop technologies; experience with AWS big data tools is a plus
- Experience with SQL and NoSQL database technologies like Redshift, MongoDB, Postgres/MySQL, BigQuery, Cassandra; experience with graph DBs (Neo4j and OrientDB) and search DBs (Elasticsearch) is a plus
- Experience in handling ETL jobs
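As a hedged illustration of the query-optimization and RDS tuning duties above, here is a small psycopg2 sketch against a Postgres instance: inspect a slow query's plan, then add an index on the filter columns. The host, credentials, table, and columns are placeholders.

import psycopg2

# Connection details are placeholders, not real infrastructure.
conn = psycopg2.connect(host="my-rds-endpoint", dbname="appdb",
                        user="etl_user", password="***")
conn.autocommit = True  # CREATE INDEX CONCURRENTLY cannot run in a transaction
cur = conn.cursor()

# Inspect how the planner executes a hot query before changing anything.
cur.execute(
    "EXPLAIN ANALYZE "
    "SELECT * FROM transactions "
    "WHERE user_id = %s AND created_at >= now() - interval '7 days'", (42,))
for (line,) in cur.fetchall():
    print(line)

# If the plan shows a sequential scan, an index on the filter columns helps.
cur.execute("CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_txn_user_time "
            "ON transactions (user_id, created_at)")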

Job posted by Gunjan Sheth

Data Scientist

Founded 2004
Products and services
Location: Pune, Bengaluru (Bangalore)
Experience: 4 - 8 years
Salary: ₹10L - ₹18L

Who are we?
Searce is a niche cloud consulting business with a futuristic tech DNA. We do new-age tech to realize the "Next" in the "Now" for our clients. We specialize in Cloud Data Engineering, AI/Machine Learning, and advanced cloud infrastructure technologies such as Anthos and Kubernetes. We are one of the top and fastest-growing partners for Google Cloud and AWS globally, with over 2,500 clients successfully moved to the cloud.

What do we believe?
- Best practices are overrated: implementing best practices can only make one 'average'.
- Honesty and transparency: we believe in the naked truth. We do what we tell and tell what we do.
- Client partnership: a client-vendor relationship? No. We partner with clients instead. And our sales team comprises 100% of our clients.

How do we work?
It's all about being happier first, and the rest follows. Searce work culture is defined by HAPPIER:
- Humble: happy people don't carry ego around. We listen to understand, not to respond.
- Adaptable: we are comfortable with uncertainty, and we accept change well, as that's what life is about.
- Positive: we are super positive about work and life in general. We love to forget and forgive. We don't hold grudges; we don't have time or adequate space for them.
- Passionate: we are as passionate about the great street-food vendor across the street as about Tesla's new model. Passion is what drives us to work and makes us deliver the quality we deliver.
- Innovative: innovate or die. We love to challenge the status quo.
- Experimental: we encourage curiosity and making mistakes.
- Responsible: driven, self-motivated, self-governing teams. We own it.

We welcome really unconventional creative thinkers who can work in an agile, flexible environment. We are a flat organization with unlimited growth opportunities and small team sizes, wherein flexibility is a must, mistakes are encouraged, creativity is rewarded, and excitement is required.

Introduction:
As a Senior Data Scientist, you will help develop and enhance the algorithms and technology that power our unique system. The role covers a wide range of challenges, from developing new models using pre-existing components to making current systems more intelligent. You should be able to train models on existing data and use them creatively to deliver the smartest experience to customers, and to develop sophisticated enterprise/cloud applications that push the threshold of intelligence in machines. Working on multiple projects at a time, you maintain a consistently high level of attention to detail while finding creative ways to provide analytical insights. You thrive in a fast, high-energy environment, balance multiple projects in real time, and are driven by the thrill of the next big challenge; when faced with an obstacle, you find clever solutions. You must have the ability and interest to work on a range of different types of projects and business processes, and a background that demonstrates this ability.

Your bucket of undertakings:
- Collaborate with team members to develop new models for classification problems
- Software profiling, performance tuning and analysis, and other general software engineering tasks
- Use independent judgment to take existing data and build new models from it
- Collaborate and provide technical guidance
- Come up with new ideas, rapidly prototype, and convert prototypes into scalable products
- Conduct experiments to assess the accuracy and recall of language processing modules, and study their effects
- Improve existing models and create self-learning systems
- Stakeholder management and leadership; decision making and problem solving

Fit assessment:
A Searce team member is a highly motivated individual with a phenomenal amount of passion and energy for whatever they engage in; who respects honesty, integrity, initiative, and a creative approach to problem solving; and who is an inspiration to colleagues: a tenacious, highly driven professional with a proven record of success and strong empathy for people, whether clients, partners, colleagues or vendors.

Accomplishment set:
- Extensive experience with Hadoop and machine learning algorithms
- Exposure to Deep Learning, Neural Networks, or related fields, and a strong interest and desire to pursue them
- Experience in Natural Language Processing, Computer Vision, Machine Learning or Machine Intelligence (Artificial Intelligence)
- Passion for solving NLP problems
- Experience with specialized tools and projects for natural language processing
- Programming experience in Python
- Knowledge of machine learning frameworks like TensorFlow and PyTorch
- Experience with software version control systems like GitHub
- Fast learner, able to work independently as well as in a team, with good written and verbal communication skills

Education and experience:
- B.E. / B.Tech / Master's in Computer Science
- Strong academics and good aptitude
- Excellent communication skills with a flair for learning quickly
- 4-8 years of relevant experience
- Research and implement novel machine learning and statistical approaches
- Prior exposure to product development is an added advantage
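A minimal, illustrative-only sketch of the classification work described above, using scikit-learn: train and evaluate a baseline text classifier. The data here is dummy; a real model would of course be trained on project data.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Toy corpus standing in for real labeled data.
texts = ["great product", "terrible support", "loved it", "awful experience"] * 50
labels = [1, 0, 1, 0] * 50

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.25, random_state=42)

# TF-IDF features + logistic regression: a common first baseline for NLP tasks.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# The accuracy/recall assessment the posting mentions.
print(classification_report(y_test, model.predict(X_test)))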

Job posted by Adarsh Charles

Big Data Engineer

Founded 2014
Products and services
Location: Bengaluru (Bangalore)
Experience: 3 - 7 years
Salary: ₹5L - ₹12L

Roles and responsibilities:
- Develop and maintain applications with technologies involving enterprise Java and distributed technologies
- Experience in Hadoop, Kafka, Spark, Elasticsearch, SQL, Kibana and Python; experience with machine learning and analytics is a plus
- Collaborate with developers, product managers, business analysts and business users in conceptualizing, estimating and developing new software applications and enhancements
- Collaborate with the QA team to define test cases and metrics, and resolve questions about test results
- Assist in the design and implementation process for new products; research and create POCs for possible solutions
- Develop components based on business and/or application requirements
- Create unit tests in accordance with team policies and procedures
- Advise and mentor team members in specialized technical areas, and fulfill administrative duties as defined by the support process
- Work with cross-functional teams during crises to address and resolve complex incidents and problems, in addition to assessment, analysis and resolution of cross-functional issues
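Since Kafka sits at the center of this stack, here is a hedged consume-loop sketch using the kafka-python client. The topic name, brokers, and group id are made up; downstream this would feed Spark or an Elasticsearch index.

import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "app-events",                            # hypothetical topic
    bootstrap_servers=["localhost:9092"],
    group_id="analytics-poc",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # Here we just print to show the consume loop; a real service would
    # transform the event and forward it to storage or an index.
    print(message.topic, message.partition, message.offset, event.get("type"))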

Job posted by Silita S

Data Engineer

Founded 2017
Products and services
via Draup
Location: Bengaluru (Bangalore)
Experience: 3 - 7 years
Salary: ₹15L - ₹30L

Job description:
The Big Data Engineer at Draup is responsible for building scalable techniques and processes for data storage, transformation and analysis. The role includes deciding on and implementing optimal, generic, and reusable data platforms. You will work with a very proficient, smart and experienced team of developers, researchers and the co-founders directly, across all application use cases.

What you will do:
- Develop, maintain, test and evaluate big data solutions within the organisation
- Build scalable architectures for data storage, transformation and analysis
- Design and develop solutions that are scalable, generic and reusable
- Build and execute data warehousing, mining and modelling activities using agile development techniques
- Lead big data projects successfully from scratch to production
- Create a platform on top of stored data sources, using a distributed processing environment like Spark, that lets users run any kind of ad-hoc query with complete abstraction from the underlying data points
- Solve problems in robust and creative ways
- Collaborate with the machine learning and harvesting teams

What you'll need:
- Proficient understanding of distributed computing principles
- Good programming experience in Python; proficiency in Apache Spark (PySpark) is a must
- Experience integrating data from multiple data sources
- Experience with SQL and NoSQL data stores such as MongoDB
- Good working knowledge of MapReduce, HDFS and Amazon S3
- Knowledge of Scala is preferable; should be able to think in a functional-programming style
- Hands-on experience tuning software for maximum performance
- Ability to communicate complex technical concepts to both technical and non-technical audiences
- Ownership of all technical aspects of software development for assigned projects
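One way to read the "ad-hoc queries with complete abstraction" point: register curated sources as Spark views so users query logical names, not storage paths. A minimal PySpark sketch, with invented dataset names and paths:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("adhoc-query-layer").getOrCreate()

# Map logical names to physical locations once, at the platform level.
SOURCES = {
    "companies": "s3a://example-datalake/companies/",
    "job_posts": "s3a://example-datalake/job_posts/",
}

for name, path in SOURCES.items():
    spark.read.parquet(path).createOrReplaceTempView(name)

def adhoc(sql: str):
    """Users submit SQL against logical views, never storage paths."""
    return spark.sql(sql)

adhoc("""
    SELECT c.industry, COUNT(*) AS openings
    FROM job_posts j JOIN companies c ON j.company_id = c.id
    GROUP BY c.industry
    ORDER BY openings DESC
""").show()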

Job posted by Farhana Shaik

Software Engineer

Founded 2015
Products and services
Location: Bengaluru (Bangalore)
Experience: 1 - 9 years
Salary: ₹5L - ₹25L

Key skills: Big Data, Hadoop, Spark, Scala, strong Java programming
- Extensive experience in Hadoop, Hive, HBase and Spark
- Hands-on development experience in Java, and in Spark with Scala using Maven
- Clear understanding of Hadoop DFS and MapReduce internal operations
- Clear understanding of the internal execution mechanism of Spark
- In-depth understanding of Hive on the Spark engine, and a clear understanding of HBase internals
- Strong Java programming concepts and a clear understanding of design patterns
- Experience implementing data munging, transformation and processing solutions using Spark
- Experience developing performance-optimized analytical Hive queries executed against huge datasets
- Experience in HBase data model design and Hive physical storage model design
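HBase data model design is mostly row-key design. Below is a hedged Python sketch (using the happybase client) of one common pattern: prefixing a user id and appending a reversed timestamp so a user's newest events sort first for prefix scans. The table, column family, and Thrift gateway are assumptions.

import time
import happybase

connection = happybase.Connection("localhost")  # assumes an HBase Thrift gateway
table = connection.table("user_events")

def row_key(user_id: str, ts: float) -> bytes:
    # Reversed timestamp => newest rows sort first within a user's key range.
    reverse_ts = (2**63 - 1) - int(ts * 1000)
    return f"{user_id}:{reverse_ts:020d}".encode()

table.put(row_key("u42", time.time()),
          {b"e:type": b"view", b"e:item": b"job-123"})

# Fetch the most recent events for one user via a key-prefix scan.
for key, data in table.scan(row_prefix=b"u42:", limit=10):
    print(key, data)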

Job posted by Staffio HR

Big Data Developer

Founded 2000
Products and services
Location: Bengaluru (Bangalore), Chennai, Pune
Experience: 4 - 10 years
Salary: ₹8L - ₹15L

Role summary/purpose:
We are looking for Developers/Senior Developers to be part of building an advanced analytical platform leveraging Big Data technologies and transforming the legacy systems. This is an exciting, fast-paced, constantly changing and challenging work environment, and the role will play an important part in resolving and influencing high-level decisions.

Requirements:
- The candidate must be a self-starter who can work under general guidelines in a fast-paced environment
- An overall minimum of 4 to 8 years of software development experience, and 2 years of Data Warehousing domain knowledge
- Must have 3 years of hands-on working knowledge of Big Data technologies such as Hadoop, Hive, HBase, Spark, Kafka, Spark Streaming, Scala, etc.
- Excellent knowledge of SQL and Linux shell scripting
- Bachelor's/Master's/Engineering degree from a well-reputed university
- Strong communication, interpersonal, learning and organizing skills, matched with the ability to manage stress, time and people effectively
- Proven experience coordinating many dependencies and multiple demanding stakeholders in a complex, large-scale deployment environment
- Ability to manage a diverse and challenging stakeholder community
- Diverse knowledge and experience of working on Agile deliveries and Scrum teams

Responsibilities:
- Work as a senior developer/individual contributor depending on the situation
- Take part in Scrum discussions and requirements gathering; adhere to the Scrum timeline and deliver accordingly
- Participate in a team environment for design, development and implementation
- Take on L3 activities on a need basis
- Prepare unit/SIT/UAT test cases and log the results; coordinate SIT and UAT testing, take feedback, and provide necessary remediation/recommendations in time
- Treat quality delivery and automation as top priorities
- Coordinate change and deployment in time
- Create healthy harmony within the team
- Own interaction points with members of the core team (e.g. BA, testing and business teams) and any other relevant stakeholders
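For the Kafka + Spark Streaming skills listed above, here is a hedged PySpark Structured Streaming sketch that reads a Kafka topic and appends it to warehouse storage. The broker, topic, schema, and paths are placeholders, and the job assumes the spark-sql-kafka package is on the classpath.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("stream-demo").getOrCreate()

stream = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")
          .option("subscribe", "transactions")
          .load())

# Kafka delivers raw bytes; cast and parse the JSON payload.
parsed = stream.select(
    F.col("key").cast("string"),
    F.from_json(F.col("value").cast("string"),
                "txn_id STRING, amount DOUBLE, ts TIMESTAMP").alias("txn"))

query = (parsed.select("txn.*")
         .writeStream.format("parquet")
         .option("path", "/warehouse/transactions/")
         .option("checkpointLocation", "/checkpoints/transactions/")
         .outputMode("append")
         .start())
query.awaitTermination()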

Job posted by Rashmi Poovaiah

Data Platform Engineer (SDE 1/2/3)

Founded 2014
Products and services
Location: Remote, NCR (Delhi | Gurgaon | Noida), Bengaluru (Bangalore)
Experience: 3 - 8 years
Salary: Best in industry

Why are we building Urban Company?
Organized service commerce is a large yet young industry in India. While India is a very large market for home and local services (~USD 50 billion in retail spend) that is expected to double in the next 5 years, there is no billion-dollar company in this segment today.

The industry is barely ~20 years old, with a sub-optimal market architecture typical of an unorganized market: a fragmented supply side operated by middlemen. As a result, experiences are broken for both customers and service professionals, each largely relying on word of mouth to discover the other. The industry could easily be 1.5-2x larger than it is today if the frictions in user and professional journeys were removed, and the experiences made more meaningful and joyful.

The Urban Company team is young and passionate, and we see a massive disruption opportunity in this industry. By leveraging technology and a set of simple yet powerful processes, we wish to build a platform that can organize the world of services and bring them to your fingertips. We believe there is immense value (akin to serendipity) in bringing together customers and professionals looking for each other. In the process, we hope to impact the lives of millions of service entrepreneurs and transform service commerce the way Amazon transformed product commerce.

Urban Company has grown 3x YoY, and so has our tech stack. We have evolved a data-driven approach to the product over the last few years; we deal with around 10TB in data analytics, at around 50Mn/day. We adopted platform thinking at a pretty early stage of UC: we started building central platform teams dedicated to solving core engineering problems around 2-3 years ago, and this has now evolved into a full-fledged vertical. Our platform vertical majorly includes Data Engineering, Service and Core Platform, Infrastructure, and Security. We are looking for Data Engineers: people who love solving for standardization, have strong platform thinking and opinions, and have solved for data engineering, data science and analytics platforms.

Job responsibilities:
- A platform-first approach to engineering problems
- Creating highly autonomous systems with minimal manual intervention
- Frameworks that can be extended to larger audiences through open source
- Extending and modifying open source projects to adapt them to Urban Company use cases
- Developer productivity
- Highly abstracted and standardized frameworks such as microservices, event-driven architecture, etc.

Job requirements/potential backgrounds (a workflow sketch follows this list):
- Bachelor's/Master's in Computer Science from a top-tier engineering school
- Experience with data pipeline and workflow management tools like Luigi, Airflow, etc.
- Proven ability to work in a fast-paced environment
- History of and familiarity with server-side development of APIs, databases, DevOps and systems
- Fanatic about building scalable, reliable data products
- Experience with big data tools (Hadoop, Kafka/Kinesis, Flume, etc.) is an added advantage
- Experience with relational SQL and NoSQL databases like HBase, Cassandra, etc.
- Experience with stream-processing engines like Spark, Flink, Storm, etc. is an added advantage

What UC has in store for you:
- A phenomenal work environment, with massive ownership and growth opportunities
- A high-performance, high-velocity environment at the cutting edge of growth
- Strong ownership expectations and freedom to fail; quick iterations and deployments with a fail-fast attitude
- The opportunity to work on cutting-edge technologies
- The massive, direct impact of the work you do on the lives of people
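Since workflow managers like Airflow are called out above, here is a minimal Airflow 2.x DAG sketch. The DAG id, task names, and schedule are illustrative, not anything from this posting.

from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull yesterday's events")

def transform():
    print("aggregate and validate")

def load():
    print("publish to the warehouse")

# A three-step daily ETL with explicit task ordering.
with DAG(dag_id="daily_events_pipeline",
         start_date=datetime(2021, 1, 1),
         schedule_interval="@daily",
         catchup=False) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3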

Job posted by Mohit Agrawal

Data Engineer (Remote/ Bengaluru)

Founded 2016
Products and services
Location: Remote, Bengaluru (Bangalore)
Experience: 2 - 10 years
Salary: ₹2L - ₹15L

Job title: Data Engineer (Remote)

You will work on:
We help many of our clients make sense of their large investments in data, be it building analytics solutions or machine learning applications. You will work on cutting-edge cloud-native technologies to crunch terabytes of data into meaningful insights.

What you will do (responsibilities):
- Collaborate with Business, Marketing and CRM teams to build highly efficient data pipelines
- Deal with customer data and build highly efficient pipelines
- Build insights dashboards
- Troubleshoot data loss, data inconsistency, and other data-related issues
- Maintain backend services (written in Golang) for metadata generation
- Provide prompt support and solutions for Product, CRM and Marketing partners

What you bring (skills):
- 2+ years of experience in data engineering
- Coding experience in one of the following languages: Golang, Java, Python, C++
- Fluency in SQL
- Working experience with at least one of the following data-processing engines: Flink, Spark, Hadoop, Hive

Great if you know (skills):
- T-shaped skills are always preferred, so the passion to work across the full-stack spectrum is more than welcome
- Exposure to infrastructure skills like Docker, Istio and Kubernetes is a plus
- Experience building and maintaining large-scale and/or real-time complex data-processing pipelines using Flink, Hadoop, Hive, Storm, etc.

Advantage Cognologix:
- A higher degree of autonomy, startup culture and small teams
- Opportunities to become an expert in emerging technologies
- Remote working options for the right maturity level
- Competitive salary and family benefits
- Performance-based career advancement

About Cognologix:
Cognologix helps companies disrupt by reimagining their business models and innovating like a startup. We are at the forefront of digital disruption and take a business-first approach to help meet our clients' strategic goals. We are a data-focused organization helping our clients deliver their next generation of products in the most efficient, modern and cloud-native way.

Skills: Java, Python, Hadoop, Hive, Spark programming, Kafka

Thanks & regards,
Cognologix HR Dept.
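The "troubleshooting data loss and inconsistency" duty often starts with a source-vs-target reconciliation check. A self-contained sketch follows; it uses sqlite3 purely so it runs anywhere, while a real pipeline would point at its actual stores, and the tables and rows are invented.

import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE source_orders (id INTEGER PRIMARY KEY, amount REAL);
    CREATE TABLE target_orders (id INTEGER PRIMARY KEY, amount REAL);
    INSERT INTO source_orders VALUES (1, 10.0), (2, 20.0), (3, 30.0);
    INSERT INTO target_orders VALUES (1, 10.0), (3, 30.0);  -- row 2 was lost
""")

def reconcile(src: str, dst: str) -> None:
    """Compare row counts and report ids present in src but missing in dst."""
    n_src = cur.execute(f"SELECT COUNT(*) FROM {src}").fetchone()[0]
    n_dst = cur.execute(f"SELECT COUNT(*) FROM {dst}").fetchone()[0]
    if n_src != n_dst:
        missing = cur.execute(
            f"SELECT id FROM {src} EXCEPT SELECT id FROM {dst}").fetchall()
        print(f"{src} -> {dst}: {n_src - n_dst} rows missing, ids={missing}")

reconcile("source_orders", "target_orders")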

Job posted by Rupa Kadam

Data Science Engineer (SDE I)

Founded 2017
Products and services
Location: Bengaluru (Bangalore)
Experience: 1 - 3 years
Salary: ₹12L - ₹20L

Couture.ai is building a patent-pending AI platform targeted towards vertical-specific solutions. The platform is already licensed by Reliance Jio and a few European retailers to empower real-time experiences for their combined >200 million end users. For this role, a credible display of innovation in past projects (or academia) is a must. We are looking for a candidate who lives and talks data and algorithms, loves to play with big data engineering, and is hands-on with Apache Spark, Kafka, RDBMS/NoSQL DBs, big data analytics, and handling Unix and production servers. A tier-1 college background (B.E. from the IITs, BITS Pilani, top NITs or IIITs, or an MS from Stanford, Berkeley, CMU or UW-Madison) or an exceptionally bright work history is a must. Let us know if this interests you and you would like to explore the profile further.

Job posted by Shobhit Agarwal

Data Engineer

Founded 2015
Products and services
Location: Bengaluru (Bangalore)
Experience: 2 - 6 years
Salary: ₹8L - ₹14L

Roles and responsibilities:
- Develop and maintain applications with PySpark
- Contribute to the overall design and architecture of the applications developed and deployed
- Performance tuning with respect to executor sizing and other environment parameters, code optimization, partition tuning, etc.
- Interact with business users to understand requirements and troubleshoot issues
- Implement projects based on functional specifications

Must-have skills:
- Good experience in PySpark, including DataFrame core functions and Spark SQL
- Good experience with SQL databases; able to write queries of fair complexity
- Excellent experience in big data programming for data transformation and aggregations
- Good at ETL architecture: business-rules processing and data extraction from a data lake into data streams for business consumption
- Good customer communication
- Good analytical skills
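A hedged sketch of the tuning work described above: explicit executor sizing and partition tuning for a PySpark job. The numbers are placeholders to be sized against a real cluster, and the paths and key column are invented.

from pyspark.sql import SparkSession

spark = (SparkSession.builder.appName("tuned-etl")
         .config("spark.executor.instances", "8")
         .config("spark.executor.cores", "4")
         .config("spark.executor.memory", "8g")
         .config("spark.sql.shuffle.partitions", "256")  # match data volume
         .getOrCreate())

df = spark.read.parquet("/data/events/")  # hypothetical input

# Repartition by the aggregation key to avoid skewed shuffles...
df = df.repartition(256, "customer_id")

out = df.groupBy("customer_id").count()

# ...and coalesce before writing so the job doesn't emit thousands of tiny files.
out.coalesce(32).write.mode("overwrite").parquet("/data/event_counts/")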

Job posted by Sudarshini K